Data Engineer

Location: Washington, District of Columbia - Remote
Category: Data
Employment Type: Contract
Job ID: 15287
Date Added: 03/16/2023

The Data Engineer will be part of the Data Science and Engineering Technologies team, filling in for a data engineer, and must be able to analyze, design, develop, integrate, run, and support various GCP ETL and data-related jobs, spanning ETL and data warehouse workloads across multiple technologies and architectures, including application servers, databases, logs, APIs for external data sources, and operating systems.

This position will be required to:

  1. Work with business owners and the Data and Business Intelligence teams (e.g., Looker, Tableau) to identify requirements for new and existing data sources, and implement ETL logic for the various interfaces we extract from: APIs, web services, and external and on-premises databases and warehouses.
  2. Work with business users and technical designers to assist in efficient data model designs that meet business unit requirements, involving integration of ACS technical data from various systems and platforms.
  3. Work with management, project managers, and other lead developers to design and develop pipelines and ensure data accuracy.
  4. Lead and participate in troubleshooting and fixing major system problems in core data systems and supplemental data pipelines.
  5. Understand the relationships between GCP products, primarily Data Fusion, BigQuery, and Looker, and demonstrate experience with Data Fusion (a comparable ETL tool is acceptable) and Google BigQuery (a comparable data warehouse is acceptable).
  6. Provide strong leadership and mentoring for less senior personnel in the areas of design, implementation, and professional development.
  7. Where required, delegate tasks effectively to Software Engineering development teams, providing guidance and proper knowledge transfer to ensure the work is completed successfully.
  8. Be flexible to work during non-business hours.
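As a minimal illustration of the kind of ETL logic described above (all names, the sample payload, and the schema are hypothetical; a local SQLite table stands in for a warehouse such as BigQuery), a sketch in Python using only the standard library:

```python
import json
import sqlite3

# Hypothetical raw extract, e.g. the payload returned by an external API.
RAW_EXTRACT = """
[{"id": "1", "amount": "12.50", "region": " east "},
 {"id": "2", "amount": null,    "region": "WEST"}]
"""

def transform(record):
    """Normalize one raw record: cast types, trim and lowercase strings,
    and default missing amounts to 0.0."""
    return (
        int(record["id"]),
        float(record["amount"]) if record["amount"] is not None else 0.0,
        record["region"].strip().lower(),
    )

def load(rows, conn):
    """Load transformed rows into a table standing in for a warehouse."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
rows = [transform(r) for r in json.loads(RAW_EXTRACT)]
load(rows, conn)
print(conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall())
```

In a real GCP pipeline the transform step would run inside a Data Fusion pipeline or a scripted job, and the load step would target BigQuery rather than SQLite; the extract-transform-load shape is the same.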

The ideal candidate will:

  1. Have experience with Data Fusion or equivalent, BigQuery or equivalent, SQL Server, scripting in Java/Python suited to GCP products, and their respective practices.
  2. Have Python experience (a plus).
  3. Develop data ETL pipelines that meet both functional and non-functional requirements, including performance, scalability, availability, reliability, and security.
  4. Have experience writing code in Java to work on data extracts that require cleanup.
  5. Have working knowledge of XML, JSON, and other data streaming artifacts and related technologies in a Java/Python environment.
  6. Have strong written and verbal communication skills.
  7. Be able to multitask across various streams of the entire data process.
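Since the qualifications above emphasize cleaning data extracts and handling XML and JSON, here is a brief standard-library-only sketch (the posting names Java/Python; Python is used here, and the sample feed is hypothetical) that converts an XML extract into cleaned, JSON-ready records:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical XML extract from an upstream system.
XML_FEED = """
<orders>
  <order id="A-1"><total>19.99</total><status> SHIPPED </status></order>
  <order id="A-2"><total>5.00</total><status>pending</status></order>
</orders>
"""

def xml_to_records(xml_text):
    """Parse the XML feed and yield cleaned dicts ready for JSON serialization:
    numeric fields are cast, string fields are trimmed and lowercased."""
    root = ET.fromstring(xml_text)
    for order in root.findall("order"):
        yield {
            "id": order.get("id"),
            "total": float(order.findtext("total")),
            "status": order.findtext("status").strip().lower(),
        }

records = list(xml_to_records(XML_FEED))
print(json.dumps(records))
```

The same cleanup pattern (parse, cast, normalize, serialize) applies whether the extract arrives as XML, JSON, or a database row.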

Education, Experience, and Technical Requirements:

Bachelor’s degree or equivalent experience. 5+ years with proven results in system development, implementation, and operations is required. Strong understanding of design patterns with a focus on tiered, large-scale data systems.