Data Engineer IRC274089

Published on 27 August 2025

Designation

Associate Consultant

Areas

Engineering

Experience

5-10 years

Locations

India - Bangalore

Skills

Airflow, Apache Spark, Hive, Python

Work model

On-site


Description

We are looking for a data engineer with 6+ years of hands-on experience on big data platforms: building and optimizing data pipelines end to end, from ingestion through processing to visualization, with strong Spark, Hive/SQL, and Kafka skills and experience scheduling jobs with Oozie or Airflow. The full requirements are listed below.

Requirements

1. Data engineer with 6+ years of hands-on experience working on big data platforms.
2. Experience building and optimizing big data pipelines and data sets, spanning data ingestion through processing to data visualization.
3. Good experience writing and optimizing Spark jobs and Spark SQL; should have worked on both batch and streaming data processing.
4. Good experience in at least one programming language, Scala or Python (Python preferred).
5. Experience writing and optimizing complex Hive and SQL queries to process huge data volumes; good with UDFs, tables, joins, views, etc.
6. Experience using Kafka or other message brokers.
7. Configuring, monitoring, and scheduling jobs using Oozie and/or Airflow (see the scheduling sketch after the good-to-have list below).
8. Processing streaming data directly from Kafka using Spark jobs; experience with Spark Streaming is a must (see the sketch after this list).
9. Should be able to handle different file formats (ORC, Avro, and Parquet) as well as unstructured data.
10. Should have experience with at least one NoSQL database or object store, such as Amazon S3.
11. Should have worked with a data warehouse tool such as AWS Redshift, Snowflake, or BigQuery.
12. Work experience on at least one cloud: AWS, GCP, or Azure.
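As a flavor of the day-to-day work items 3, 8, and 9 describe, below is a minimal PySpark Structured Streaming sketch that consumes JSON events from Kafka, aggregates them with Spark SQL, and writes the results as Parquet. It is an illustration only: the brokers, topic, event schema, and paths are hypothetical placeholders, not part of this posting.

```python
# A minimal sketch, assuming Spark 3.x with the spark-sql-kafka connector on the
# classpath; the brokers, topic, event schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-to-parquet").getOrCreate()

# Assumed shape of the incoming JSON events (illustrative only).
schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the raw stream from Kafka; `value` arrives as bytes and must be parsed.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical brokers
    .option("subscribe", "events")                        # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
    .withWatermark("event_time", "10 minutes")  # required for append-mode aggregation
)

# Spark SQL on the stream: count actions per 5-minute window.
events.createOrReplaceTempView("events")
counts = spark.sql("""
    SELECT window(event_time, '5 minutes') AS win, action, COUNT(*) AS n
    FROM events
    GROUP BY window(event_time, '5 minutes'), action
""")

# Write the aggregated stream as Parquet, checkpointing progress for recovery.
query = (
    counts.writeStream.outputMode("append")
    .format("parquet")
    .option("path", "/data/out/action_counts")            # hypothetical path
    .option("checkpointLocation", "/data/chk/action_counts")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```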

Good to have skills:

1. Experience with AWS cloud services such as EMR, S3, Redshift, and EKS/ECS.
2. Experience with GCP cloud services such as Dataproc and Google Cloud Storage.
3. Experience working with very large big data clusters holding millions of records.
4. Experience working with the ELK stack, especially Elasticsearch.
5. Experience with Iceberg, Hadoop MapReduce, Apache Flink, Kubernetes, etc.
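Likewise, for the Oozie/Airflow scheduling in requirement 7, here is a minimal Airflow DAG sketch that submits the Spark job above on a daily schedule. The DAG id, schedule, and application path are hypothetical.

```python
# A minimal sketch, assuming Airflow 2.4+ with the
# apache-airflow-providers-apache-spark package installed and a configured
# `spark_default` connection; the DAG id, schedule, and path are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_action_counts",    # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Submit the PySpark job above through the spark_default connection.
    run_spark_job = SparkSubmitOperator(
        task_id="run_spark_job",
        application="/jobs/kafka_to_parquet.py",  # hypothetical path
        conn_id="spark_default",
        verbose=True,
    )
```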


What we offer

Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. 

Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

About GlobalLogic

GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
