




**Job Summary:** We are seeking a Data Engineer for Telefónica Tech's BI team, responsible for end-to-end data management, from origin to final exploitation, within a multidisciplinary environment.

Key responsibilities:

1. Integration and acquisition of new data sources and ETL/ELT processes.
2. Development of Power BI dashboards and reports for decision-making.
3. Establishment and maintenance of data governance policies.

**What is Telefónica Tech?**

Telefónica Tech is the leading digital transformation company within the Telefónica Group. We offer an extensive portfolio of integrated technological services and solutions in Cybersecurity, Cloud, IoT, Big Data, Artificial Intelligence, and Blockchain, supporting our clients throughout their digital transformation journey.

We are a team of over 6,200 bold professionals working daily from various locations worldwide to achieve excellence through transparent leadership and a strong team spirit. If you identify with our core values, we look forward to meeting you!

www.telefonicatech.com

**What do we do in the team?**

The BI team at Telefónica Tech carries out several key responsibilities, all centered on end-to-end data management, from origin to final exploitation.

This begins with administration of the corporate data warehouse built on Snowflake. It entails not only maintaining the database but also optimizing it for efficient querying, ensuring data security, and overseeing the integration of new data sources.

As Data Engineers within our team, we are responsible for integrating and acquiring new data sources, as well as executing ETL/ELT processes into our data warehouse. To achieve this, we rely on tools within the Azure cloud ecosystem, primarily Databricks and Data Factory. Databricks, built on Apache Spark, enables advanced data analytics and lets us own our ingestion processes.
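As an illustration only (not part of the job description): one pattern behind the ingestion work described above, upserting each incremental batch so the warehouse keeps only the newest version of every record, can be sketched in plain Python. In the team's actual stack this logic would typically run as a Spark merge in Databricks; all names and sample data below are hypothetical.

```python
from datetime import datetime

def merge_incremental(target: dict, batch: list) -> dict:
    """Upsert batch rows into a snapshot, keeping the newest version per key.

    target: {id: row} current snapshot of the curated table.
    batch:  newly ingested rows, each a dict with 'id' and 'updated_at'.
    """
    merged = dict(target)
    for row in batch:
        current = merged.get(row["id"])
        # Accept the row if the key is new or the row is more recent.
        if current is None or row["updated_at"] > current["updated_at"]:
            merged[row["id"]] = row
    return merged

# Hypothetical sample data: an existing snapshot, one late update, one new key.
snapshot = {1: {"id": 1, "value": "a", "updated_at": datetime(2024, 1, 1)}}
batch = [
    {"id": 1, "value": "a2", "updated_at": datetime(2024, 2, 1)},  # newer id 1
    {"id": 2, "value": "b", "updated_at": datetime(2024, 1, 15)},  # new id 2
]
result = merge_incremental(snapshot, batch)
```

The same idea scales to the warehouse setting because the merge is keyed and idempotent: replaying a batch leaves the snapshot unchanged.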
Meanwhile, Azure Data Factory allows us to automate and orchestrate data flows from multiple sources, transform data as needed, and load it into Snowflake for subsequent analysis and visualization in Power BI.

In Power BI, we develop the dashboards and reports that enable informed business decisions.

Data governance is another critical area of our responsibility. Our mission is to establish and maintain the policies and procedures that govern appropriate data usage across the organization, including data quality, confidentiality, integrity, and availability.

**What will your day-to-day be like?**

Your role will involve active participation in designing end-to-end data engineering solutions, from understanding use cases and data sources through to developing data models. Your responsibilities span from data source integration to data exploitation within the data warehouse, using Spark in Databricks and SQL in Snowflake. You will also maintain and evolve our Spark-based ingestion developments for data capture.

All of this takes place within a highly skilled, multidisciplinary team, offering opportunities to apply a broad range of cutting-edge technologies for designing data management systems, using diverse approaches to data processing and modeling.

Your day-to-day:

* Integration of new data sources.
* Creation of new data pipelines.
* Development of iterative enhancements to Spark-based ingestion processes.
* Design of data exploitation in the data warehouse.
* Data modeling for new use cases using SQL.

**And for this, we believe it would be ideal if you had…**

**Experience**

* Minimum 1 year of experience building data pipelines with Python and PySpark.
* Experience in Azure data-oriented environments: Azure Databricks, Azure Data Factory, and Azure DevOps.
* Experience developing ETL/ELT processes, incremental ingestions, and analytical data modeling.
* Experience with CI/CD environments and version control (Git).
* Knowledge of Spark process optimization and handling large-scale datasets.
* Familiarity with designing and implementing Medallion-style data architectures (Bronze/Silver/Gold).

**Education**

* Degree in Computer Engineering, Telecommunications Engineering, Vocational Training, or equivalent professional experience.
* Training courses aligned with our technology stack: Python, Spark, Git, Databricks, Snowflake, and/or Azure.

**Technical Skills**

* Python and/or Spark: intermediate-to-advanced level.
* Git/DevOps: intermediate level.
* SQL and/or PL/SQL: intermediate level.
* Strong communication and teamwork skills.
* Experience with tools such as Snowflake, Databricks, Azure Data Factory, or DBT is valued.

**The skills best suited for this role, aligning with the team and project, would be:**

* Organizational and communication skills; customer orientation.
* Ability to build trust-based relationships with customers and maintain composure under pressure.
* Sense of urgency.
* Capacity for synthesis and executive summary.

**What do we offer?**

* Work-life balance measures and flexible hours.
* Continuous training and certifications.
* Hybrid remote work model.
* Attractive social benefits package.
* An excellent, dynamic, multidisciplinary work environment.
* Volunteering programs.

**#WeAreDiverse #WePromoteEquality**

We firmly believe that diverse and inclusive teams are more innovative, transformative, and deliver better results. That is why we promote and guarantee the inclusion of all individuals regardless of gender, age, sexual orientation or identity, culture, disability, or any other condition. We want to meet you!


