




**Job Summary:** We are seeking a Data Engineer to design, implement, and optimize data pipelines and analytical models, and to collaborate on the evolution of the Lakehouse architecture while ensuring dataset quality.

**Key Highlights:**

1. Design and implementation of data ingestion pipelines
2. Collaboration on Lakehouse architecture and Medallion model evolution
3. Participation in data platform construction and evolution

Join Marco’s team and co-create strategies and solutions that shape people’s lives in the Trade Marketing industry.

**What will be your responsibilities?**

* Design and implement data ingestion pipelines from diverse sources (APIs, operational databases, ERP systems, etc.).
* Integrate new data sources into the corporate platform.
* Optimize existing pipelines and processes to improve performance, efficiency, and scalability.
* Design and implement analytical data models optimized for business consumption.
* Develop data transformations using SQL, Python, and PySpark.
* Collaborate on the evolution of the Lakehouse architecture and Medallion model (Bronze, Silver, Gold).
* Implement data quality controls and frameworks.
* Participate in automation, data governance, and MLOps initiatives.
* Collaborate with BI, platform, and product teams to ensure dataset availability and quality.

**What profile are we looking for?**

**Education**

* Advanced student or graduate in Computer Science, Information Systems, Computing Sciences, or related fields.

**Experience**

* 2–4 years of experience in Data Engineering, BI, or ETL development roles.
* Experience working with cloud data platforms (Azure, AWS, or GCP).
* Experience building and optimizing production data pipelines.

**Technical Skills (Mandatory)**

* Advanced SQL (CTEs, window functions, query optimization).
* Python for data processing and transformation.
* Experience designing ETL/ELT processes.
* Dimensional data modeling (star schema).
* Git usage for code versioning.
**Desirable Skills**

* Data Lake / Lakehouse architectures.
* Streaming or real-time data processing.
* Pipeline orchestration tools.
* Experience with platforms such as Databricks, Snowflake, BigQuery, or Synapse.
* Experience integrating systems such as ERP, CRM, or APIs.
* Knowledge of MLOps or production Machine Learning architectures.

**What do we offer?**

* Opportunity to participate in building and evolving the organization’s data platform.
* Exposure to architectural decisions and modern technologies.
* Collaborative work with BI, platform, and product teams.
* Hybrid work mode.

**Your path to joining our team:**

**Initial Pre-screening:** Upon applying, you’ll answer a series of initial questions to determine if you meet the minimum required criteria.

**Interview with AVI and Brief Call:** If you pass pre-screening, you’ll have an interview with AVI, our recruitment bot, followed by a short call (under 20 minutes) with our recruiter.

**Team Interview:** If you progress further, you’ll have an interview with the team you’re applying to.

We are committed to making this process agile, transparent, and enriching. We look forward to welcoming you to our team!

Learn more about the impact of our XPERS across the Americas here: https://www.instagram.com/talentmarcomkt/


