Lead Data Engineer

October 5, 2018

Adaptive Solutions Group is a premier provider of information technology personnel. We provide a variety of technical professionals available for contract, contract-to-hire, or direct placement positions to companies in and around the St. Louis, Kansas City, Dallas, and Denver areas.

 

We are currently looking for a Lead Data Engineer to join our team.

 

Job Description:

  • As a Lead Data Engineer, you will be responsible for developing and deploying cutting-edge distributed data solutions.
  • You will participate in all aspects of the software development lifecycle, which includes estimating, technical design, implementation, documentation, testing, deployment, and support of applications developed for our business partners.
  • Work within a team with strong data warehousing skills, including data cleanup, ETL, ELT, and handling scalability issues for an enterprise-level data warehouse.
  • Implement, troubleshoot, and optimize distributed solutions based on modern big data technologies such as Hive, Hadoop, Spark, and Kafka in both on-premises and cloud deployment models to solve large-scale processing problems.

 

Responsibilities:

  • Provide technical and/or business application consultation to business partners and team members in the areas of functionality, architecture, operating systems and databases for complex application systems.
  • Lead the design of highly scalable and reliable data pipelines to consume, integrate, and analyze large amounts of data from various sources to support the rapidly changing needs of the business.
  • Design and develop code, scripts, and data pipelines that leverage structured and unstructured data as well as batch and streaming sources.
  • Build automated pipelines and data services to validate, catalog, aggregate and transform ingested data.
  • Build and deliver deployment and monitoring capabilities consistent with DevOps models.
  • Analyze existing systems and architectures for improvement recommendations.
  • Monitor, maintain and optimize production systems.
  • Investigate and resolve incidents reported by users. Identify opportunities to automate, consolidate, and simplify the platform.
  • Ensure code quality, perform code reviews, and mentor development team members.
  • Ensure users’ expectations are met, explain when desired outcomes are not feasible, and provide alternative solutions to meet objectives.
  • Design and develop software for new functionality, improvements and system longevity.
  • Ensure all documentation of technical architecture and systems are complete.
  • Provide training and guidance to team members and users as required.
  • Must be able to meet the schedules of a global operation, including availability for off-hours meetings.
  • Must be able to travel if necessary.
  • Maintain regular and predictable attendance.
  • Perform other duties as assigned.

 

Requirements:

  • Bachelor’s degree in Computer Science or equivalent education and experience
  • 10+ years of progressive experience in enterprise-level IT functions.
  • 5+ years of experience in the Hadoop platform.
  • 5+ years of experience in data architecture.
  • 5+ years of experience in use of open source tools such as: Hadoop, Sqoop, Hive, HBase, Spark, Flume, Storm, Python, Kafka, Hortonworks, Apache, or Cassandra.
  • 5+ years working with enterprise-grade data integration, database management and engineering solutions.
  • Have a strong background in SQL / Data Warehousing (dimensional modeling).
  • Solid knowledge of Java (Or similar JVM language such as Scala, Groovy, Ruby, etc) and/or Python.
  • Solid knowledge of the following technologies: HTTP, SSL/TLS, REST, SQL, JSON, and Excel.
  • Database experience: relational (MySQL, Oracle, Postgres) and NoSQL/search (MongoDB, Hive, Solr).
  • Knowledge of Continuous Integration environments such as Jenkins, CruiseControl, Continuum, or Travis.
  • Knowledge of Test-Driven Development processes and tooling such as JUnit, Mocha, Jasmine, or Protractor.
  • Knowledge of DevOps-style deployment tools such as Docker, Ansible, or Vagrant.
  • Experience working in Agile environments (e.g., Scrum and Kanban).
  • Experience with performance tuning for very large-scale applications.
  • Proven hands-on experience designing, building, configuring, and supporting the Hadoop ecosystem in a production environment.
  • Data stores: architecting data and building performant data models at scale for the Hadoop/NoSQL ecosystem of data stores to support different business consumption patterns off a centralized data platform.
  • Data acquisition technologies, including batch and streaming analytics architectures implemented in Hadoop (such as Spark Streaming, Storm, NiFi, and Kafka), and ETL from RDBMS and ERP systems to Hive with data validation to ensure data quality.
  • Spark/MapReduce/ETL processing, including .NET, Java, Python, Scala, and Talend, for data analysis of production big data applications.
  • Advanced interpersonal skills, demonstrating an ability to lead.
  • Advanced ability to translate business needs and problems into systems’ design and technical solutions.
  • Expert knowledge of structured and object-oriented programming, relational database design, ORM tools, operating systems, networking concepts, and systems integration.
  • Complex analytical and problem-solving skills.
  • Broad business and finance related knowledge.
  • Advanced oral and written communication skills.
  • Ability to work well within an Agile team environment.
  • Ability to multi-task.
  • Ability to work outside normal business hours with users in different time zones.
  • Ability to work well within a team environment and participate in department/team projects.
  • Ability to balance detail with departmental goals/objectives.
  • Advanced ability to translate business needs and problems into viable/accepted solutions.
  • Advanced skills in customer relationship management and change management.
  • Ability to liaise with individuals across a wide variety of operational, functional, and technical disciplines.

 

Preferred:

  • Master’s degree and/or LOMA certification.
  • 5+ years’ experience in programming/systems analysis.
  • Experience with rules engine technologies.

 

Adaptive Solutions Group is an equal opportunity employer. All applicants will be considered for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or veteran or disability status.

 

Adaptive Solutions Group offers a competitive compensation and benefits package that includes medical, dental, STD/LTD, life insurance coverage, 401k, paid vacation and holidays.

Apply Now