
Data Engineer | Analytics

Apidel Technologies

This is a Full-time position in New York, NY posted April 29, 2021.

Duration: Full-Time / Permanent

Responsibilities

  • Collaborate with project stakeholders (client) to identify product and technical requirements; conduct analysis to determine integration needs.
  • Apply data warehousing concepts to build a data warehouse for reporting purposes.
  • Assist Analytics/Reporting teams in setting up data pipelines & monitoring daily jobs.
  • Lead and provide guidance to junior team members; oversee the project life cycle from intake through delivery.
  • Develop and test ETL components to high standards of data quality and act as hands-on development lead.
  • Oversee and contribute to the creation and maintenance of relevant data artifacts (data lineages, source to target mappings, high level designs, interface agreements, etc.).
  • Ensure that developer responsibilities are met by mentoring, reviewing code and test plans, and verifying adherence to design best practices as well as coding and architectural guidelines, standards, and frameworks; offer guidance, communicate risk, and address roadblocks as they arise.
  • Take the lead in providing operational support for the analytics infrastructure as part of the DevOps model.
  • Research, identify, and recommend technical and operational improvements that increase reliability and/or efficiency in maintaining and developing the application.
  • Responsible for planning, architecture, and design for the journey toward Spark, Hadoop big data, and cloud solutions such as AWS and Databricks.

Job Requirements:

Skills

  • Strong verbal and business communication skills.
  • Strong business acumen and a demonstrated aptitude for analytics that drive action.

Education / Experience

  • Master’s degree in statistics, mathematics, computer science/engineering, operations research, or a related analytics area; candidates with BA/BS degrees in the same fields from top-tier academic institutions are also welcome to apply. A computer science degree is preferred.
  • 4+ years of experience in an ETL or data engineering role in an analytics environment.
  • Working knowledge of relational database management systems (RDBMS) such as Oracle, Teradata, SQL Server, etc. (Oracle preferred).
  • Experience building real-time data pipelines and working with APIs.
  • Experience with cloud services (AWS preferred).
  • Expertise in building data pipelines on big data platforms; good understanding of data warehousing concepts.
  • Knowledge of PySpark, shell scripting, SQL, Python, and standard data science packages (Pandas, NumPy, etc.); a brief illustrative sketch follows this list.
  • Exposure to ETL software such as Talend, Informatica, etc. is a plus, not mandatory.
  • Prior exposure to big data technologies (Spark, Hadoop, Hive, etc.) is a plus, not mandatory.
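
For orientation only, here is a minimal, hypothetical PySpark sketch of the kind of ETL component the responsibilities above describe: read a raw extract, apply basic cleansing, and write a partitioned table for reporting. The file paths, column names, and schema are assumptions made for illustration and are not part of the posting.

```python
# Minimal, hypothetical PySpark ETL sketch -- paths, columns, and schema are
# assumed for illustration; they do not come from the job posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Extract: read a raw daily extract landed by an upstream job (assumed path).
raw = spark.read.option("header", True).csv("s3://raw-zone/orders/2021-04-29/")

# Transform: basic typing, de-duplication, and filtering -- the kind of
# cleansing an ETL component would standardize and test for data quality.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())
)

# Load: write a partitioned Parquet table for reporting/analytics teams to query.
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://warehouse/analytics/orders/"))

spark.stop()
```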