DESCRIPTION

**Main Responsibilities:**

* Design, develop, and optimize data pipelines using SQL and Databricks.
* Implement data ingestion and transformation workflows using Spark in distributed environments.
* Ensure data governance, traceability, and integrity across all stages of the data architecture.
* Collaborate with Data Engineering, Risk, Technology, and Business teams to deliver integrated data solutions.
* Contribute to automating manual processes and modernizing legacy reporting tools through scalable code.

REQUIREMENTS

**Required Experience:**

* Minimum of 5 years of hands-on experience in data engineering or data management within the financial sector or big data environments.
* Strong experience in SQL development, including complex queries, optimization, CTEs, and window functions (see the illustrative sketch at the end of this section).
* Practical experience with Databricks, Spark, or other distributed data platforms.

**Education:**

* University degree in a STEM field (Engineering, Computer Science, Mathematics, etc.) with a focus on data and analytics.
* Additional training or certifications in data platforms or cloud-based tools will be valued.

**Languages:**

* Advanced level of English.

**Technical Knowledge and Skills:**

* Advanced proficiency in SQL development (essential requirement).
* Experience using Databricks, Spark, and cloud-based data ecosystems.
* Solid understanding of data modeling, normalization, and data quality practices.
* Advanced skills in Excel, VBA, and the Microsoft Office suite.
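
To illustrate the SQL level referenced above, here is a minimal sketch combining a CTE with a window function, runnable on Databricks SQL or any engine with standard window-function support. All table and column names (`trades`, `desk`, `notional`, `trade_date`) are hypothetical and chosen purely for illustration:

```sql
-- Illustrative only: the kind of CTE + window-function query this
-- role's SQL requirement describes. Schema is hypothetical.
WITH desk_totals AS (
    -- Aggregate raw trades into one row per desk per day
    SELECT
        desk,
        trade_date,
        SUM(notional) AS daily_notional
    FROM trades
    GROUP BY desk, trade_date
)
SELECT
    desk,
    trade_date,
    daily_notional,
    -- Window function: 5-day trailing average of notional per desk
    AVG(daily_notional) OVER (
        PARTITION BY desk
        ORDER BY trade_date
        ROWS BETWEEN 4 PRECEDING AND CURRENT ROW
    ) AS notional_5d_avg
FROM desk_totals
ORDER BY desk, trade_date;
```

Candidates should be comfortable reading, writing, and tuning queries of at least this shape.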


