
Data Integration Engineer

Key Responsibilities

Create batch and real-time streaming data solutions using tools like Kafka, Fivetran, Informatica, Matillion, SSIS, and Snowflake to ensure seamless data flow across systems.

Pipeline Development: Develop and maintain data ingestion solutions, connecting diverse source systems to cloud-based data warehouses (primarily AWS/Snowflake).

Technology Utilization: Use tools such as Python, Bash, Kafka, Striim, Fivetran, Matillion, and SSIS to build scalable pipelines.

Data Quality & Governance: Ensure data accuracy, consistency, and compliance with privacy policies, establishing robust data quality controls.

Collaboration & Support: Work with DevOps, Product, and QA teams to optimize performance, provide 24/7 on-call support, and manage data infrastructure.

System Optimization: Implement data partitioning, indexing, and compression strategies to enhance data retrieval and processing.

Required Qualifications & Skills

Experience: Typically 3–5+ years in data engineering, data warehousing, or cloud-based data solutions.

Technical Skills: Expertise in SQL, data modeling (dimensional/data vault), Python scripting, and AWS cloud platforms.

Tools: Experience with Snowflake, Fivetran, Kafka, or Informatica is frequently required.

Education: Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.

Methodologies: Experience with Agile methodologies and matrixed organizational structures.

Key skills (keywords): Fivetran, Matillion, SSIS (5+ years of experience).

 

Job Description

Your role

As a Data Integration Engineer at Capgemini, you will contribute to the design, development, and maintenance of scalable data integration solutions that support enterprise analytics and digital transformation initiatives. Working closely with senior engineers and cross-functional teams, you will help deliver reliable data pipelines and ensure data quality across systems.

In this intermediate-level role, you are expected to apply solid technical skills, a collaborative mindset, and a willingness to learn while taking ownership of assigned components within data projects. You will play a key role in translating business requirements into practical data solutions and continuously improving data processes.

 
Develop and maintain data integration pipelines using cloud platforms (AWS, Snowflake)
Support the design and implementation of ETL/ELT workflows
Ensure data accuracy, consistency, and performance across systems
Collaborate with data architects, analysts, and business stakeholders
Troubleshoot and resolve data-related issues in a timely manner
Contribute to technical documentation and best practices
Continuously enhance skills and adopt new data technologies
 
Your profile
Data-driven mindset — ability to translate business needs into efficient, scalable data solutions with clarity and impact
Collaborative approach — strong teamwork and communication skills to work seamlessly across global, multicultural environments
Continuous learning agility — curiosity and commitment to staying current with evolving cloud, data, and integration technologies
Problem-solving orientation — proactive in identifying issues, optimizing processes, and delivering high-quality outcomes
Ownership and accountability — takes responsibility for deliverables, ensuring reliability, performance, and continuous improvement
 
What you'll love

Empowered Careers with Purpose

Work on meaningful projects that use technology to solve real-world challenges.

Be part of a company that values sustainability, inclusion, and digital equity.

Contribute to building a better future for people, planet, and society.

Ref. code: 475321
Posted on: May 12, 2026
Experience Level: Experienced Professionals
Contract Type: Permanent
Location: Aguascalientes, MX
Brand: Capgemini
Professional Community: Data & AI
