




We are seeking a **Data Engineer** capable of designing, building, and optimizing large-scale data processing solutions, bringing solid technical expertise and analytical vision to projects focused on public administration. The selected candidate will join a **temporary 4-month project**, contributing to the development of data architectures and processes within a demanding and collaborative technology environment.

**Key Responsibilities:**

* **Design and development of data processing systems**, including ETL pipelines for large volumes of information.
* **Data processing, integration, and transformation** using languages such as Python, R, or Scala and APIs.
* **Administration and optimization of data environments and platforms**, especially SQL/NoSQL databases and Big Data ecosystems (Hadoop, Cloudera).
* **Development and automation of distributed analysis and processing workflows.**
* **Implementation of data security and governance measures**, including role-based access control, traceability, and regulatory compliance.
* **Management of repositories and version control** using tools such as Git and GitLab.
* **Preparation of technical documentation and reporting** for various project stakeholders.

**Minimum Requirements:**

* University degree in a STEM field recognized in Spain (Computer Science, Mathematics, or equivalent).
* Minimum of **8 years’ experience in IT projects**.
* At least **5 years’ experience in designing and developing data processing systems** and in creating and maintaining ETL processes.
* Proficiency in **data modeling, programming, and integration** using Python, R, or Scala and API components.
* Experience in **security and best practices**, including role-based access control.
* Advanced use of **Git, GitLab, and code repositories**.

**Desirable Requirements:**

* Experience with the **Cloudera ecosystem**.
* Experience with **Hadoop technologies** (NiFi, Sqoop, Kafka, Spark, Oozie, Airflow, Hive, Kudu, Knox, Impala, HBase, or Arcadia Data).

**Personal Competencies:**

* **Analytical thinking and a structured approach** to solving complex problems.
* **Clear communication skills** with both technical and functional teams.
* **Results orientation and attention to detail**, especially in critical data environments.
* **Proactivity, technical autonomy, and commitment** to continuous improvement.
* **Teamwork** in multidisciplinary environments.

**What We Offer:**

* **Participation in a 4-month project** within a high-impact technology environment.
* A **dynamic and collaborative work environment**.
* Opportunities for **professional growth, training, and development** during the project.
* Involvement in **innovative technology and digital transformation projects**.
* An organization committed to employee **well-being and satisfaction**.
* Access to **flexible compensation**, according to internal policies.

If you want to join a team where **your talent matters, your ideas count, and your professional development is a top priority**, please submit your CV and a cover letter outlining your experience in tenders and relevant projects.


