Software Developer (Hadoop/Spark)
Role Overview:
We are looking for a Software Developer with experience in Hadoop, Spark, and cloud platforms. You will build and optimize data pipelines, work with relational and NoSQL databases, and deploy data applications to cloud environments. The role calls for a passion for data engineering and an interest in developing both batch and streaming pipelines.
Key Responsibilities:
- Design, build, and maintain scalable data pipelines and architectures
- Work with both relational and NoSQL databases to handle structured and unstructured data sources
- Develop and deploy applications on cloud platforms such as Azure or AWS
- Utilize Hadoop and Spark for distributed data processing
- Implement efficient batch and streaming data processing pipelines
- Collaborate with cross-functional teams to deliver high-quality data solutions
- Ensure data security, quality, and reliability across all deployments
- Optimize data workflows for performance, scalability, and reliability
- Use SQL for querying and managing data effectively
What We’re Looking For:
- 1-3 years of experience in building and maintaining data pipelines and architectures
- Proficiency with SQL and working knowledge of relational databases (e.g., PostgreSQL, MySQL)
- Experience with NoSQL databases (e.g., MongoDB, Cassandra)
- Experience with Hadoop and distributed data processing platforms like Spark
- Familiarity with cloud platforms (Azure, AWS) for deploying and scaling data applications
- Ability to handle both structured and unstructured data sources
- Knowledge of batch and streaming data processing techniques
- Strong analytical and problem-solving skills
- A collaborative and team-oriented mindset with excellent communication skills
Singapore, SG