
Role: Consultant
Function: Engineering
Experience: 5-10 years
Location: India - Hyderabad, Pune
Skills: AI, Machine Learning, Python, R, SQL
Work model: Hybrid
Job Description
About the Role
We are seeking a highly skilled Data Scientist to design, develop, and deploy AI/ML models that power data mapping, anomaly detection, and reconciliation automation within large-scale projects. This role combines strong data science expertise with practical engineering skills to build intelligent systems that improve the Telecom Platform, reduce manual effort, and ensure high-quality outcomes.
The ideal candidate has hands-on experience with machine learning model development, data engineering, and cloud-based AI/ML workflows, along with exposure to the telecom domain.
Key Responsibilities
AI/ML Model Development
Design, develop, and deploy ML models for automated data mapping, anomaly detection, reconciliation, fraud detection, and churn prediction.
Conduct data profiling, feature engineering, and exploratory analysis to improve accuracy and performance.
Select appropriate algorithms (supervised, unsupervised, reinforcement learning) based on business needs.
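To illustrate the kind of anomaly-detection work described above, a minimal baseline can be as simple as flagging z-score outliers. This is a hypothetical sketch using only the standard library, not the platform's actual method; production models would use richer features and learned thresholds:

```python
import statistics

def detect_anomalies(values, threshold=2.0):
    """Flag indices whose z-score exceeds the threshold (simple baseline)."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        # All values identical: nothing can be an outlier.
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Example: a burst in otherwise steady call-volume counts (illustrative data).
counts = [100, 102, 98, 101, 99, 500, 100, 97]
print(detect_anomalies(counts))  # → [5]
```

A z-score rule is only a starting point; the same interface generalizes to learned detectors (e.g. isolation forests) trained on historical telemetry.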
Automation & Integration
Build end-to-end ML pipelines for data ingestion, preprocessing, training, validation, and deployment.
Integrate models into the Telecom Platform and automation frameworks, enabling seamless execution in production.
Monitor model performance, implement retraining strategies, and optimize for scalability and reliability.
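The pipeline stages listed above (ingestion, preprocessing, training, validation) can be thought of as composable steps. The snippet below is a deliberately simplified illustration with a stand-in "model" (the training mean); a real pipeline would run these stages in an orchestrator such as Azure Data Factory or Databricks:

```python
def ingest(raw_rows):
    # Drop obviously malformed records on the way in.
    return [r for r in raw_rows if r is not None]

def preprocess(rows):
    # Normalize to floats; real pipelines add feature engineering here.
    return [float(r) for r in rows]

def train(features):
    # Stand-in "model": the mean of the training data.
    return sum(features) / len(features)

def validate(model, features, tolerance=0.5):
    # Accept the model if mean absolute error stays within tolerance.
    mae = sum(abs(f - model) for f in features) / len(features)
    return mae <= tolerance

def run_pipeline(raw_rows):
    # Chain the stages end to end and return (model, validation_passed).
    rows = preprocess(ingest(raw_rows))
    model = train(rows)
    return model, validate(model, rows, tolerance=1.0)
```

The value of the staged structure is that each step can be monitored, retried, and retrained independently, which is what the monitoring and retraining responsibility above amounts to in practice.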
Collaboration & Delivery
Work with cross-functional teams (engineering, architecture, QA, business SMEs) to align AI solutions with the Telecom Platform and enterprise requirements.
Translate business requirements into clear technical specifications, user stories, and acceptance criteria.
Contribute to platform innovation by adopting the latest AI/ML advancements in anomaly detection and reconciliation automation.
Knowledge Sharing & Mentorship
Document AI models, frameworks, and best practices for reusability.
Mentor junior engineers/data scientists, fostering a collaborative and learning-oriented environment.
What You Bring
Experience:
5 to 8 years of experience in developing and deploying machine learning models in production.
Hands-on experience in classification, anomaly detection, or reconciliation automation is highly preferred.
Proven track record of delivering AI/ML projects from ideation to production deployment.
Technical Skills:
Strong knowledge of ML algorithms and techniques (supervised, unsupervised, reinforcement learning, anomaly detection, NLP, and deep learning).
Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch).
Experience with data pipelines, ETL/ELT, Delta Lake, and data lakehouse architectures.
Cloud-based ML experience (Azure Data Factory, Azure Databricks, AWS SageMaker, GCP AI/ML).
Skilled in PySpark for large-scale data processing.
Familiarity with containerization (Docker, Kubernetes) for scalable deployment.
Strong grounding in data reconciliation frameworks and automation techniques.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.
Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
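As a toy illustration of the reconciliation work referenced above, the core comparison reduces to set operations over keyed records. The feed names below are hypothetical; at telecom scale this logic would typically run in PySpark over Delta tables rather than in-memory dicts:

```python
def reconcile(source_a, source_b):
    """Compare two {record_id: amount} feeds and classify the differences."""
    only_a = sorted(source_a.keys() - source_b.keys())      # missing from feed B
    only_b = sorted(source_b.keys() - source_a.keys())      # missing from feed A
    mismatched = sorted(
        k for k in source_a.keys() & source_b.keys()
        if source_a[k] != source_b[k]                       # present in both, values differ
    )
    return {"only_a": only_a, "only_b": only_b, "mismatched": mismatched}

# Illustrative feeds: billing records vs. mediation records.
billing = {"r1": 10.0, "r2": 20.0, "r3": 30.0}
mediation = {"r1": 10.0, "r2": 25.0, "r4": 40.0}
print(reconcile(billing, mediation))
# → {'only_a': ['r3'], 'only_b': ['r4'], 'mismatched': ['r2']}
```

Automation then layers on top of this comparison: routing each discrepancy class to the right remediation workflow, and feeding the mismatch patterns back into the ML models described earlier.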
Soft Skills:
Strong problem-solving and critical thinking ability.
Ability to simplify complex AI/ML concepts for both technical and business stakeholders.
Comfortable working in fast-paced Agile environments with shifting priorities.
Collaborative, proactive, and outcome-oriented mindset.
Qualifications:
Bachelor’s degree in Computer Science or a related field from IIT.
Master’s degree or equivalent advanced degree preferred.
Proven track record of delivering data science projects from ideation to production.
Strong communication skills and the ability to tell compelling stories with data.
Comfortable with both structured and unstructured data sets.
Certifications in AI/ML, cloud platforms, or data science frameworks are a plus.
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.
Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.
Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.
Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!
High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.