
DevOps Engineer (Big Data)

Region: Kraków
Industry: IT Services
Rate: 800-1100 PLN net + VAT per day
Reference number: DevOps/BIG/ES

We are looking for a DevOps Engineer with strong English language skills to join our international team of engineers and administrators at our rapidly expanding engineering office in Krakow. You need to be able to work autonomously under tight deadlines and quickly shifting priorities. If you are passionate about Big Data and cloud computing, this is a great opportunity to apply your skills in our growing Hadoop/HBase environment.

  • Support our Big Data team in maintaining, monitoring, expanding and developing our open source Hadoop/HBase, Storm, Spark, Kafka and Elasticsearch environment.
  • Work closely with the infrastructure, database, engineering and QA teams to ensure high availability and performance according to agreed SLAs.
  • Monitor, maintain and troubleshoot live Big Data environments for real-time web access and asynchronous MapReduce processing (a minimal monitoring sketch follows this list).
  • Automate deployment and configuration of the server farms (Puppet).
  • Help plan version upgrades without downtime (rolling upgrades).
  • Provide and manage cloud replication and data backups.
  • Review and analyse log files.
  • Align the environment to support BCP/disaster recovery processes.
  • Work with the datacenter team on dedicated and cloud servers.
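
For illustration only, the sketch below shows the kind of routine health check this monitoring work involves: polling Elasticsearch's standard /_cluster/health API from Python. The host, port and alert threshold are assumptions made for this example, not details taken from the posting.

    #!/usr/bin/env python3
    """Minimal monitoring sketch: poll Elasticsearch cluster health.

    Assumptions (not from this posting): the cluster is reachable at ES_URL
    and exposes the standard /_cluster/health endpoint.
    """
    import json
    import sys
    import urllib.request

    ES_URL = "http://localhost:9200"  # assumed host and port

    def cluster_health(url=ES_URL):
        """Return the parsed JSON body of GET /_cluster/health."""
        with urllib.request.urlopen(url + "/_cluster/health", timeout=5) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        health = cluster_health()
        status = health.get("status", "unknown")  # green / yellow / red
        print("cluster=%s status=%s unassigned_shards=%s" % (
            health.get("cluster_name"), status, health.get("unassigned_shards")))
        # A non-zero exit code lets cron or a Nagios-style wrapper raise an alert.
        sys.exit(0 if status == "green" else 1)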
  • Relevant bachelor’s degree or equivalent combination of education/experience.
  • Good written and spoken English.
  • Experience with Debian or a Debian-based Linux distribution, with confident handling of apt/dpkg (see the short dpkg sketch after this list).
  • Scripting expertise, including Bash and Perl.
  • Knowledge of Java-based web services and open source technologies.
  • Exposure to typical system administration tasks such as storage capacity management, performance tuning and system dump analysis.
  • Experience with production-grade deployments and troubleshooting.
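
To make the apt/dpkg requirement concrete, here is a small, hedged sketch that uses dpkg-query to check installed package versions on a Debian host; the package names and expected version prefixes are placeholders, not the team's actual stack.

    #!/usr/bin/env python3
    """Sketch: verify installed package versions on a Debian host via dpkg-query.

    The package names and expected version prefixes below are placeholders.
    """
    import subprocess

    EXPECTED = {
        "openjdk-8-jdk": "8u",  # hypothetical package -> expected version prefix
        "puppet": "3.",
    }

    def installed_version(package):
        """Return the version reported by dpkg-query, or None if not installed."""
        try:
            result = subprocess.run(
                ["dpkg-query", "-W", "-f=${Version}", package],
                capture_output=True, text=True, check=True)
            return result.stdout.strip()
        except subprocess.CalledProcessError:
            return None

    if __name__ == "__main__":
        for pkg, prefix in EXPECTED.items():
            version = installed_version(pkg)
            ok = version is not None and version.startswith(prefix)
            print("%s: %s %s" % (pkg, version or "missing", "OK" if ok else "CHECK"))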


  • Hands-on experience with the Big Data stack: HDFS, HBase, MapReduce, Pig, Avro, Storm, Spark, Kafka, Elasticsearch.
  • Experience with the open source Cloudera distribution of Hadoop/HBase (without using Cloudera Manager).
  • Experience with open source configuration management tools such as Puppet.
  • Basic Java programming experience.
  • Hands-on experience with open source monitoring tools, including Nagios and Ganglia.
  • Knowledge of virtualization and containerization software such as Docker.
  • Knowledge of or experience with metrics monitoring using Grafana and Graphite (a minimal Graphite example follows this list).
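
As a hedged illustration of the Grafana/Graphite item above, the snippet below pushes one metric to Graphite's Carbon listener over its plaintext protocol ("path value timestamp" lines on TCP port 2003). The host name, metric path and value are assumptions made up for this example.

    #!/usr/bin/env python3
    """Sketch: send one metric to Graphite over the Carbon plaintext protocol."""
    import socket
    import time

    CARBON_HOST = "graphite.example.internal"  # assumed host
    CARBON_PORT = 2003                         # Carbon's default plaintext port

    def send_metric(path, value, timestamp=None):
        """Send a single "path value timestamp" line to Carbon over TCP."""
        timestamp = int(timestamp or time.time())
        line = "%s %s %d\n" % (path, value, timestamp)
        with socket.create_connection((CARBON_HOST, CARBON_PORT), timeout=5) as sock:
            sock.sendall(line.encode("ascii"))

    if __name__ == "__main__":
        # Placeholder metric path and value; Grafana would graph it from Graphite.
        send_metric("bigdata.hbase.regions_in_transition", 0)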
Interested candidates are asked to submit their applications by clicking the apply button at the top of the page.

Please include the following clause in your CV: "I consent to the processing of my personal data for the purposes of the recruitment process (in accordance with the Personal Data Protection Act of 29 August 1997; consolidated text: Journal of Laws of 2014, item 1182, as amended)."

Thank you very much for all applications. We will contact selected candidates.