Sr. Data Engineer - Remote - Olivos - 1727
Negotiable Salary
Indeed
Full-time
Remote
No experience limit
No degree limit
Pje. Centenario 130, C1405 Cdad. Autónoma de Buenos Aires, Argentina
Description

What does the company do?
A startup belonging to a renowned pharmaceutical laboratory. Since 1997, they have provided access to scientific updates and medical education through their Virtual Campus for healthcare professionals. It is the leading medical training platform in Spanish and Portuguese.

What do you need to join the team?

On a personal level:
- Clear communication and the ability to collaborate with technical and non-technical teams.
- Proactivity, autonomy, and a commitment to continuous improvement.
- Clear documentation and shared knowledge.
- Identification with the company's values: humanistic and cultural foundation, democratization of knowledge, diversity, innovation, trust, commitment, and quality.

On a technical level:
- 5 years of experience in Data Engineering roles in cloud environments (AWS). (Mandatory)
- A holistic view of the data lifecycle: ingestion, transformation, availability, and monitoring.
- Advanced proficiency in AWS: Glue (Jobs and Catalog), S3, Athena, Lambda, IAM. (Mandatory)
- Experience integrating data from multiple sources (APIs, databases, files).
- Python programming for pipelines and process automation. (Mandatory)
- Advanced knowledge of SQL and databases such as PostgreSQL, MariaDB, and MongoDB.
- Infrastructure-as-code management using Terraform.
- Version control and CI/CD with Git and GitHub/GitLab.
- Knowledge of data security and governance (roles, auditing, access control).
- Experience operating production pipelines with monitoring and alerts.
- Handling of optimized storage formats such as Parquet.

Desirable skills:
- Experience with Microsoft Fabric, Azure Synapse, OneLake, and Azure Data Factory.
- Familiarity with Medallion architectures and Delta or Iceberg formats.
- Use of tools such as Airflow, Step Functions, or Amazon EventBridge for orchestration.
- Knowledge of dbt for modular SQL data transformation.
- Experience with PySpark and handling large volumes of data.
- Knowledge of monitoring tools such as CloudWatch or similar.

What will you do?
As a Senior Data Engineer, you will:
- Design and maintain Data Lake, Lakehouse, and Data Warehouse architectures using AWS services.
- Develop and optimize serverless ETL/ELT pipelines using Python, AWS Glue, Lambda, and Step Functions.
- Manage metadata and data governance with the AWS Glue Catalog, ensuring traceability and discoverability.
- Integrate data from external APIs, databases (PostgreSQL, MariaDB, MongoDB), and files in S3.
- Define and implement large-scale data quality, validation, and normalization strategies.
- Ensure efficient storage usage through FinOps practices (S3 Glacier, etc.).
- Apply security, access, and auditing policies to infrastructure and data.
- Document and version processes, models, and data flows with Git, following CI/CD best practices.
- Provide technical judgment to select tools, design solutions, and improve existing architectures.
- Collaborate with the Analytics, Product, and Data Science teams to put data into action.
- Promote a strong, collaborative technical culture by sharing best practices with other areas of the data and technology team.
- Actively participate in sprint and task planning under an agile methodology (Scrum), using tools such as Jira.

What is the challenge of the position?
As a Senior Data Engineer, your mission will be to design and operate modern, robust, and scalable data solutions that drive strategic decision-making, process automation, and the development of products based on data intelligence. You will be a key player in the evolution of the company's Data Lake, Data Warehouse, and Lakehouse architectures, integrating diverse internal and external data sources. You will work with leading technologies (AWS, Python, Terraform) and have a direct impact on how data is built and consumed within the organization.

Who will you work with?
You will be part of the Data area, working alongside data governance profiles and serving as the organization's principal Data Engineer, supported by two outsourced analysts.

What tools will you work with?
AWS Glue, Lambda and Step Functions, Python, CloudWatch, Terraform, IAM, S3, and relational and non-relational databases.

When and where will you work?
You will work full-time, 100% remotely and flexibly, but you must be available to occasionally attend training sessions or team meetings at the office located in Olivos.

What do they offer?
- Fully remote setup
- 1 flexible Friday per month
- Your birthday off
- OSDE health insurance
- Corporate discounts on products and services
- Agreements for courses and postgraduate studies
- 15 business days of vacation (21 consecutive days) + a year-end holiday break
- On-site cafeteria when attending the office
- Gym, soccer fields, and padel courts at the Olivos offices
- Charter bus and parking when attending the office
- Points to redeem at Le Pain Quotidien, located in the office

What stages does the selection process consist of?
You will have an initial interview with our recruiter, Carla Carrizo, to discuss your professional experience and interests. Then you will participate in meetings with our client:
- An HR session to learn about the culture
- A technical session to understand the project in detail
- Finally, an in-person meeting so you can visit the workplace

Source: Indeed
