Hotline: 678-408-1354

Big Data Engineer (Java/Scala, MapReduce/Spark, ETL)

Key Skills: Java/Scala, MapReduce/Spark

Responsibilities

  • Design a generic framework for high-throughput and streaming data ingestion, curation, and linkage for our Data Fabric, a Data-as-a-Service platform intended to be configured and extended for deployments across the globe
  • Develop high-throughput and streaming data pipelines using our Lambda Architecture; bulk sources include server logs, file uploads, database logs, relational sources, and CDC, while streaming sources include WebSockets, HTTP client callbacks, and Kafka events
  • Develop a rules-driven data transformation layer that can be rapidly configured and deployed across business units and geographies globally
  • Build components for standardization, data curation, identity resolution, and end-to-end traceability at the dataset and record levels
  • Develop data provisioning components that deliver data for operational analytics as well as research purposes
  • Develop ultra-low-latency microservices that serve data from the State Container within 10 milliseconds

Qualifications

  • 5+ years of hands-on experience as a Software Developer/Engineer
  • 4+ years of Java/Scala development experience
  • 1+ year of experience working with native MapReduce and/or Spark
  • Desired – 2+ years of data engineering/ETL development experience working with data at scale
  • Desired – 2+ years of experience with Heroku or Docker
  • 1+ year of experience across one or more of the following: Hadoop, MapReduce, HDFS, Cassandra, HBase, Hive, Flume, Sqoop, Spark, Kafka, etc.
  • Bachelor's degree in Computer Science, Engineering, or a quantitative discipline

Job Type: Contract


Contact Us

Eltas EnterPrises Inc.
3978 Windgrove Crossing
Suite 200A
Suwanee, Georgia
30024, USA
contact@eltasjobs.com
