
PySpark Developer | 6 to 12 Years | Pan India

Job Description

PySpark + SQL
 
Proficient in leveraging Spark for distributed data processing and transformation.
Skilled in optimizing data pipelines for efficiency and scalability.
Experience with real-time data processing and integration.
Familiarity with Apache Hadoop ecosystem components.
Strong problem-solving abilities in handling large-scale datasets.
Ability to collaborate with cross-functional teams and communicate effectively with stakeholders.

Primary Skills

PySpark
SQL

 

Ref. code: 35661
Posted on: Feb 5, 2025
Experience Level: Experienced Professionals
Contract Type: Permanent
Location: Noida, IN
Brand: Capgemini
Professional Community: Software Engineering
