




**Excited to grow your career?**

BBVA is a global company with more than 160 years of history. We operate in more than 25 countries and serve more than 80 million customers. Our more than 121,000 professionals work in multidisciplinary teams, with profiles as diverse as financiers, legal experts, data scientists, developers, engineers, and designers.

**About the job:**

We are looking for a hybrid profile with strong technical and business capabilities. The candidate will be responsible for the full data lifecycle: cloud-based extraction and modeling (AWS), pipeline automation, and effective visualization for decision-making. In the medium term, they will contribute to the evolution toward predictive and Machine Learning models.

**Main Responsibilities (The "What You’ll Do"):**

* **Data Engineering and Modeling (AWS):** Design and build efficient data models using **AWS Athena** and advanced **SQL**.
* **Pipeline Development (ETL/ELT):** Create automated, scalable workflows using **SageMaker** and **PySpark** scripts to ensure continuous data updates.
* **Visualization and BI:** Translate complex data into intuitive, high-impact dashboards using **MicroStrategy** and **AWS QuickSight**.
* **Business Interaction:** Act as a bridge between development teams and business stakeholders, translating requirements into technical solutions.
* **ML Evolution:** Lay the groundwork for the future implementation of Machine Learning and AI models, identifying opportunities where these technologies deliver real value.

*(Illustrative sketches of this kind of Athena and PySpark work appear at the end of this posting.)*

**Requirements:**

**1. Technical and Cloud Experience:**

* Demonstrable experience working with the **AWS** ecosystem (specifically Athena, S3, Glue, or SageMaker for processing).
* Advanced proficiency in **SQL** for complex queries and query optimization.
* Solid experience programming in **Python** and **PySpark** for large-scale data manipulation.
* Ability to orchestrate and automate data pipelines.

**2. Visualization and Storytelling:**

* Proven experience designing visualizations in enterprise BI tools, preferably **MicroStrategy** or **AWS QuickSight**.
* Ability to "tell a story" with data, not just display numbers.

**3. Soft Skills and Communication:**

* Excellent verbal and written communication skills: able to explain pipeline logic or business insights to both developers and commercial managers.
* Experience working with Agile methodologies.

**Desirable Requirements:**

* Theoretical or practical knowledge of predictive modeling techniques (regression, time series, clustering).
* Genuine interest in deepening expertise in Machine Learning and Artificial Intelligence as the data infrastructure matures.
* Experience in the financial sector.

**Candidate Profile:**

We are not looking for a pure academic but a **"Builder"**: someone who enjoys writing PySpark code and architecting cloud infrastructure as much as presenting strategic findings in a board meeting.

**Skills:** Analytical Work, Data Science
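**Illustrative sketch 1 (Athena + SQL modeling):** A minimal sketch of the kind of Athena-backed modeling query this role involves, written in Python with the open-source `awswrangler` (AWS SDK for pandas) library. This is not BBVA code: the database, table, and column names (`example_analytics_db`, `daily_customer_activity`, etc.) are hypothetical.

```python
import awswrangler as wr

# Hypothetical database, table, and columns, for illustration only:
# roll a curated daily table up to monthly totals per customer.
df = wr.athena.read_sql_query(
    sql="""
        SELECT customer_id,
               date_trunc('month', day) AS month,
               SUM(total_amount)        AS monthly_amount
        FROM daily_customer_activity
        GROUP BY 1, 2
        ORDER BY 1, 2
    """,
    database="example_analytics_db",
)
print(df.head())
```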

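**Illustrative sketch 2 (automated PySpark refresh):** A minimal sketch, again with hypothetical S3 buckets and columns, of the extract-transform-load step an automated pipeline (for example, one scheduled from SageMaker) might run to keep the curated table above continuously up to date.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical bucket names, paths, and columns, for illustration only.
spark = SparkSession.builder.appName("daily-customer-refresh").getOrCreate()

# Extract: read raw transaction data landed in S3.
raw = spark.read.parquet("s3://example-raw-bucket/transactions/")

# Transform: one row per customer per day, keeping only positive amounts.
daily = (
    raw.filter(F.col("amount") > 0)
       .groupBy("customer_id", F.to_date("ts").alias("day"))
       .agg(
           F.sum("amount").alias("total_amount"),
           F.count("*").alias("n_transactions"),
       )
)

# Load: write back partitioned by day so Athena can prune partitions.
(
    daily.write.mode("overwrite")
         .partitionBy("day")
         .parquet("s3://example-curated-bucket/daily_customer_activity/")
)
```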

