Data Engineer

Job Description

We are looking for a Data Engineer to join our Data Lab division.

The successful candidate will be responsible for designing, developing, and maintaining end-to-end data pipelines for Big Data analytics, and will work closely with other engineers and analysts to ingest, transform, analyze, and visualize data at scale.

Responsibilities:

  • Design, develop and implement data pipelines using Apache Spark, Hadoop and other Big Data tools.
  • Develop and maintain ETL (Extract, Transform, Load) processes to load data from different sources into data warehouses and data lakes.
  • Optimize the performance of data pipelines and data storage systems.
  • Work with data scientists and analysts to understand their needs and develop data analytics solutions.
  • Develop and maintain reports and dashboards for data visualization.
  • Ensure data quality, security and integrity.
  • Keep up to date with Big Data technologies and analytics tools.

Key Requirements:

  • 2+ years of work experience as a Data Engineer.
  • Experience with Apache Spark, Hadoop, Hive, Pig, and other Big Data tools.
  • Experience with GCP (Google Cloud Platform) and managing cloud data warehouses and data lakes.
  • Knowledge of SQL and scripting languages such as Python or Bash.
  • Ability to design and develop end-to-end data pipelines.
  • Ability to optimize the performance of data pipelines and data storage systems.
  • Ability to work independently and as part of a team.

Qualifications:

  • Bachelor's Degree in Computer Science, Computer Engineering, or Mathematics.
  • Fluent in English.

Desirable Experience:

  • Experience with machine learning and deep learning tools.
  • Knowledge of Docker and Kubernetes.
  • Experience with CI/CD (Continuous Integration/Continuous Delivery).

Location: Avellino
