Mid+/Senior Data Engineer IRC162711
Job: IRC162711
Location: Poland
Designation: Senior Software Engineer
Experience: 3-5 years
Function: Engineering
Skills: AWS, Database, Hadoop, Java, Python, Scala, Spark, SQL
Remote: Yes
Description:
We are looking for a Mid/Senior Data Engineer for a financial project that provides asset management services, helping institutions select their investment strategies. In this position, you will be part of our “Business Intelligence and Reporting Team”. The successful candidate will be responsible for the planning, design, implementation, and maintenance of data-driven solutions in AWS, alongside SaaS offerings. It is a great chance to become part of a distributed, cross-functional team based in Poland. If you are looking for stability, this is the role for you: the project is part of an undertaking planned to run for several years.
Don’t stand back while the IT world moves forward! Apply, and our awesome Recruitment team will introduce you to this big adventure!
Requirements:
- 3+ years of working experience with Hadoop
- Strong knowledge of a programming language (e.g., Java, Scala, or another OOP language)
- Solid knowledge of and experience with Python, Spark, and SQL
- Experience writing ETLs in Scala
- Database fundamentals
- Experience working on AWS (EC2, EMR, S3)
- Good understanding of the SDLC (dev, test, pre-prod, and production environments, and the CI/CD pipeline)
- Development tools and practices: Git, Gradle/Maven, Jenkins, TDD
- Knowledge of tools such as Bitbucket, OpenVPN, Artifactory, JIRA, and Confluence will be a strong plus
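To give a flavor of the ETL work named in the requirements, here is a minimal extract-transform-load sketch. It uses only the Python standard library (csv + sqlite3) so it runs anywhere; on the actual project these pipelines would be written in Scala with Spark on AWS EMR, and every name here (the `trades` table, the column names) is purely hypothetical.

```python
import csv
import io
import sqlite3

def run_etl(raw_csv: str, conn: sqlite3.Connection) -> int:
    """Extract rows from CSV text, keep valid trades, load into SQLite."""
    # Extract: parse the raw CSV input into dict rows.
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    # Transform: drop rows with non-positive amounts, normalize symbols.
    cleaned = [
        (r["symbol"].upper(), float(r["amount"]))
        for r in rows
        if float(r["amount"]) > 0
    ]
    # Load: write the cleaned rows to the target table.
    conn.execute("CREATE TABLE IF NOT EXISTS trades (symbol TEXT, amount REAL)")
    conn.executemany("INSERT INTO trades VALUES (?, ?)", cleaned)
    conn.commit()
    return len(cleaned)

# Hypothetical sample input: one row fails validation (negative amount).
raw = "symbol,amount\naapl,100.5\nmsft,-3\ngoog,42\n"
conn = sqlite3.connect(":memory:")
loaded = run_etl(raw, conn)
```

The same extract/transform/load shape carries over to Spark: `DictReader` becomes a DataFrame read from S3, the list comprehension becomes `filter`/`select` transformations, and the SQLite insert becomes a write to the warehouse.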
Job Responsibilities:
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud-based ‘big data’ technologies from AWS
- Create tool-chains for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
- Support our software developers, database architects, data analysts, and data scientists on data initiatives, ensuring a consistent, optimal DataOps architecture across ongoing projects
- Create and maintain optimal data and model DataOps pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
What We Offer
Exciting Projects: With clients across all industries and sectors, we offer an opportunity to work on market-defining products using the latest technologies.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment — or even abroad in one of our global centers or client facilities!
Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules.
Professional Development: We develop paths suited to your individual talents through international knowledge exchanges and professional certification opportunities.
Excellent Benefits: We provide our employees with private medical care, sports facility cards, group life insurance, travel insurance, a relocation package, food subsidies, and cultural activities.
Fun Perks: We want you to feel comfortable in your work, which is why we create a good working environment with relaxation zones, host social and team-building activities, and stock our kitchens with delicious teas and coffees!