Job Search

We support your career development.

769+ open positions

Data Architect IRC224313

Job No.: IRC224313
Location: India - Bangalore
Designation: Senior Consultant
Experience: 15+ years
Function: Engineering
Skills: AWS, DB Architecture, ETL, Snowflake
Work Model: Hybrid

Job Overview

12+ years of experience leading and implementing data projects from scratch.
This role needs someone with software engineering experience, particularly in data-centric roles.
A good candidate should be able to communicate well with stakeholders, accurately capture their needs, and architect software solutions that leverage established design patterns and adhere to SOLID principles.
A comprehensive understanding of software design and architecture, including CPU utilization, I/O operations, and cloud-based services, along with maintaining SLIs/SLOs/SLAs, is crucial.
The candidate should also have a solid grasp of backend engineering principles, including proficiency with various API protocols (REST, GraphQL, gRPC), load-balancing techniques, and reverse proxy configurations.
This knowledge will enable them to manage complex tasks such as event tracking, schema evolution, securing data ingestion processes, and developing scalable ETL/ELT pipelines.
Experience handling retries for data sources that lack standard connectors in tools like Segment/Hightouch is also important.
Additionally, the candidate should be adept at creating Directed Acyclic Graphs (DAGs) using workflow orchestration tools such as Airflow, NiFi, or Dagster.
After successful data ingestion, the candidate should be able to set up a data lake, ensuring raw data is duplicated and organized.
This requires a deep understanding of analytics engineering, data warehousing, and database modeling to support the data analysis needs of analysts and other stakeholders.
The candidate should be well versed in managing Slowly Changing Dimensions, implementing star/snowflake schemas and denormalized data marts, and automating the processing of raw data into analytics-ready formats using tools like dbt.
Furthermore, it is essential for the candidate to implement security measures such as column-level masking and row-level encryption to safeguard Personally Identifiable Information (PII).
The candidate should not only be proficient in building and managing advanced data pipelines, but also experienced in data warehousing solutions and backend software engineering.
While expertise in dashboard creation using tools like Looker or Superset is advantageous, it is not a prerequisite for this role.
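For sources without off-the-shelf connectors, retry handling usually ends up hand-written. The sketch below is a minimal exponential-backoff wrapper; it is purely illustrative, and the function names and defaults are hypothetical rather than part of any stack used in this role:

```python
import time


def fetch_with_retries(fetch, max_attempts=4, base_delay=0.5):
    """Call fetch() until it succeeds, doubling the wait after each failure.

    Re-raises the last exception once max_attempts is exhausted.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts:
                raise
            # Exponential backoff: 0.5s, 1s, 2s, ... between attempts.
            time.sleep(base_delay * 2 ** (attempt - 1))
```

A production version would typically narrow the caught exception to transient errors only and add jitter to the delay to avoid thundering-herd retries.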
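As a concrete illustration of Slowly Changing Dimensions, a Type 2 merge closes out the current row and appends a new version whenever a tracked attribute changes. This is a deliberately naive in-memory sketch (in practice this is a dbt snapshot or a warehouse MERGE statement); all field and function names here are hypothetical:

```python
def scd2_merge(dimension, updates, as_of):
    """Naive SCD Type 2 merge over a list-of-dicts dimension table.

    Each row has: key, value, valid_from, valid_to (None marks the current row).
    A changed value closes the old row at `as_of` and appends a new current row.
    """
    current = {r["key"]: r for r in dimension if r["valid_to"] is None}
    for key, value in updates.items():
        row = current.get(key)
        if row is not None and row["value"] == value:
            continue  # no change: keep the current row open
        if row is not None:
            row["valid_to"] = as_of  # close out the superseded version
        dimension.append(
            {"key": key, "value": value, "valid_from": as_of, "valid_to": None}
        )
    return dimension
```

The key property is that history is never overwritten: every prior version of a row survives with a closed validity window, which is what lets analysts query the dimension "as of" any past date.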
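Column-level masking of PII can be as simple as replacing values with a salted hash, so records stay joinable but unreadable. A minimal sketch over a list-of-dicts table; the salt handling is illustrative only (a real deployment would pull the salt from a secrets manager, not pass it inline):

```python
import hashlib


def mask_column(rows, column, salt):
    """Return a copy of `rows` with `column` replaced by a salted SHA-256 digest.

    Equal inputs map to equal digests, so the masked column remains joinable.
    """
    masked = []
    for row in rows:
        row = dict(row)  # shallow copy so the source table is untouched
        digest = hashlib.sha256((salt + str(row[column])).encode("utf-8"))
        row[column] = digest.hexdigest()
        masked.append(row)
    return masked
```

Strictly speaking this is pseudonymization rather than irreversible masking: with the salt, an attacker who can guess candidate values can confirm them, which is why salt custody matters.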

 



What We Offer

Exciting Projects: We focus on industries like High-Tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.

Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!

Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.

Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toast Master), stress management programs, professional certifications, and technical and soft skills training.

Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.

Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can have coffee or tea with your colleagues over a game, and we offer discounts at popular stores and restaurants!

About GlobalLogic

GlobalLogic is a leader in digital engineering. We help brands across the globe design and build innovative products, platforms, and digital experiences for the modern world. By integrating experience design, complex engineering, and data expertise—we help our clients imagine what’s possible, and accelerate their transition into tomorrow’s digital businesses. Headquartered in Silicon Valley, GlobalLogic operates design studios and engineering centers around the world, extending our deep expertise to customers in the automotive, communications, financial services, healthcare and life sciences, manufacturing, media and entertainment, semiconductor, and technology industries. GlobalLogic is a Hitachi Group Company operating under Hitachi, Ltd. (TSE: 6501) which contributes to a sustainable society with a higher quality of life by driving innovation through data and technology as the Social Innovation Business.

Apply Now

The gender information on this form helps us understand the makeup of our applicant pool in this key area, and to continuously improve our efforts to make our workforce more inclusive.
Attach a file here, or browse.
Only .docx, .rtf, and .pdf formats, up to 5 MB.