Responsibilities:
Take the customer’s outdated infrastructure and modernize it with automation and DevOps tooling to make sure it is ready for the customer’s challenges.
Drive customers to choose the right platform and solutions, whether that is RabbitMQ from Red Hat, Elasticsearch, Kafka, or whatever other technology suits our customers’ needs.
Proposed solutions should help customers become more scalable, resilient, and cost-effective.
Help our customers deliver great products by designing and implementing solid architectures.
Stabilize cloud compute infrastructure.
Support customers in reducing costs.
Guide our customers into the world of DevOps by writing blog posts, speaking at meetups, teaching workshops, and more!
Requirements:
3+ years of experience as a DevOps engineer – a must
Hands-on experience with Linux systems administration/engineering in a large, distributed environment (Debian/Ubuntu/CentOS/RHEL)
Must have hands-on experience with at least one of the leading Public Cloud Providers – AWS or GCP (Certifications – Advantage)
Hands-on experience with common DevOps tools: Git, Terraform, CloudFormation, Jenkins
Experience with Git version control and working with Git workflows
Experience writing complex automation scripts (Bash, Python, Ruby, etc.)
Production experience with configuration management (e.g. Ansible, Puppet, Chef)
Experience with Docker – building images as part of CI/CD
Deep knowledge of Jenkins – Pipeline as Code – writing complex scripted pipelines
Ability to effectively operate with flexibility in a fast-paced and constantly evolving team
Ability to quickly learn, understand, and work with new and emerging technologies, methodologies, and solutions in the cloud space
Excellent communication skills
Customer orientation and problem-solving skills
Advantages:
Experience with ELK, Grafana, Prometheus, Sensu, or InfluxDB
HashiCorp fan – experience with Terraform (and Packer), Vault, and Consul