Job offer ID: 70109, dated 2019-02-19

LINUX/HADOOP ENGINEER


About us
Capgemini is one of the leading global companies offering consulting and IT services.
The Cloud is fashionable - everyone's talking about it, many use it, but few know what it consists of, how it works, how to access it, and how to take care of it. We, Cloud Infrastructure Services, understand the subject thoroughly: from high-level services, through managing hardware and operating systems, internal and access networks, to managing applications, IT operations, availability, configurations, and changes. Working in an international environment, we use a number of foreign languages.
Candidate’s profile
Essential:
  • Very good Unix/Linux experience (preferably Red Hat)
  • Good troubleshooting skills
  • Networking know-how (DNS, TCP/IP)
  • Basic knowledge of virtualization (VMware)
  • Experience with application support
  • Basic knowledge of Active Directory
  • A great sense of humor

Nice to have:
  • Familiarity with the Hadoop ecosystem
  • Basic knowledge of cloud services (AWS, Azure)
  • Scripting experience (Bash, Python, PowerShell)
  • Good interpersonal skills

An asset would be:
  • Readiness for on-call duty
Job description
We are looking for a professional experienced with Linux support who wants to gain experience with some of the latest technologies.
You will provide support for operating systems (Linux / Windows) and get trained on Big Data applications support.
You will have the chance to collaborate with data scientists and architects on a daily basis.
You should be motivated, have a “can do” attitude and be willing to keep on developing your skills (no routine).
Main accountabilities:
  • Perform day-to-day Hadoop cluster activities
  • Install, deploy, and maintain Hadoop clusters
  • Analyze Hadoop cluster performance and provide troubleshooting support
  • Monitor Hadoop cluster connectivity and security
  • Automate manual tasks to improve performance
  • Manage user access
  • Manage backup and recovery solutions for the platform and databases
  • Cooperate with other teams, including external suppliers
  • Develop and maintain existing documentation
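To give a flavor of the task-automation duties listed above, here is a minimal sketch in Python (one of the scripting languages the posting mentions). The report format and the 80% threshold are illustrative assumptions, not part of the role description; in practice the input would come from a command such as `hdfs dfsadmin -report`.

```python
import re

def parse_dfs_used_percent(report: str) -> float:
    """Extract the cluster-wide 'DFS Used%' value from a dfsadmin-style report."""
    match = re.search(r"DFS Used%:\s*([\d.]+)%", report)
    if match is None:
        raise ValueError("no 'DFS Used%' line found in report")
    return float(match.group(1))

def needs_attention(report: str, threshold: float = 80.0) -> bool:
    """Return True if cluster disk usage exceeds the alerting threshold."""
    return parse_dfs_used_percent(report) > threshold

# Illustrative sample resembling `hdfs dfsadmin -report` output (assumed format).
SAMPLE_REPORT = """\
Configured Capacity: 1099511627776 (1 TB)
Present Capacity: 989560464998 (921.55 GB)
DFS Remaining: 142091479654 (132.33 GB)
DFS Used: 847468985344 (789.26 GB)
DFS Used%: 85.64%
"""

if __name__ == "__main__":
    print(parse_dfs_used_percent(SAMPLE_REPORT))  # 85.64
    print(needs_attention(SAMPLE_REPORT))         # True
```

A script like this would typically run from cron or a monitoring agent and page the on-call engineer when the threshold is crossed.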
Your team
The Big Data Lake as a Service team is a dynamic environment with a great variety of services and tools.

As part of the Infrastructure team, you will have the opportunity to manage infrastructure components based on DELL EMC VxBLOCK, automate operational tasks with vRealize Automation, and use the rest of your time to learn new technologies on our internal learning platform. If you prefer working directly with Hadoop clusters, want to manage the environment at both the application and OS level, have good troubleshooting skills, and are willing to explore new technologies - apply to the Application team.

Join us and be part of a great team that shapes the future of Big Data services!
What we offer
  • Working in a close-knit team and a friendly atmosphere
  • Development of expert or leader competences
  • Bonuses, including those for recommending new employees
  • A wide range of training and co-financing of courses
  • Additional life insurance
  • Attractive package of additional benefits (fitness, gym, cinema, etc.) - you choose what you want
  • Integration events and joint celebrations
  • Employee volunteering opportunities and interesting CSR projects
  • Disability inclusion, assistive technologies, reasonable accommodations
  • Private medical care, also for your family
  • Bicycle parking and carpooling
Date added: 2019-02-19
Offer valid until: 2019-03-19
Industries: Office Administration; IT - Administration
Required experience: A few years (up to and including 3 years)
Required education: Bachelor's degree in a relevant field
Employment type: Full-time
Position level: Specialist
Salary: 500 - 30,000 PLN gross
Remote work possible: NO