Headquartered in New York City, Dataiku was founded in Paris in 2013 and achieved unicorn status in 2019. Now, more than 1,000 employees work across the globe in our offices and remotely. Backed by a renowned set of investors and partners including CapitalG, Tiger Global, and ICONIQ Growth, we’ve set out to build the future of AI.
Dataiku is looking for an experienced Cloud Architect to join its Field Engineering Team to support the deployment of its Everyday AI Platform, Dataiku, to an ever-growing customer base.
As a Cloud Architect, you’ll work with customers at every stage of their relationship with Dataiku, from the initial evaluations to enterprise-wide deployments. In this role, you will help customers design, build, validate, and run their Data Science and Everyday AI platforms.
This role requires strong technical ability, adaptability, inventiveness, and excellent communication skills. Sometimes you will work with clients on traditional big data technologies such as SQL data warehouses and on-premises Hadoop data lakes, while at other times you will be helping them discover and implement the most cutting-edge tools: Spark on Kubernetes, cloud-based elastic compute engines, and GPUs. If you are interested in staying at the bleeding edge of big data and AI while maintaining a strong working knowledge of existing enterprise systems, this will be a great fit for you.
The position can be fully remote anywhere in the US or based at one of our offices (Denver, NYC).
- Evangelize the challenges of building Enterprise Data Science Platforms to technical and non-technical audiences
- Understand customer requirements for scalability, availability, and security, and provide architecture recommendations
- Deploy Dataiku in a large variety of technical environments (on-premises/cloud, Hadoop, Kubernetes, Spark, …)
- Design and build reference architectures, how-tos, scripts, and other helpers to make deploying and maintaining Dataiku smooth and easy
- Automate operation, installation, and monitoring of the data science ecosystem components in our infrastructure stack
- Provide advanced support for strategic customers on deployment and scalability issues
- Advance integrations with Kubernetes and other infrastructure by contributing code based on customer demand and requirements
- Coordinate with Revenue and Customer teams to deliver a consistent experience to our customers
- Train our clients and partners in the art and science of administering a bleeding-edge Elastic AI platform
- Drive technical success by being a trusted advisor to our customers and our internal account teams to provide the best recommendations and advance customer accounts
- Troubleshoot complex customer issues when necessary
- Strong Linux system administration experience
- Hands-on experience with the Kubernetes ecosystem for setup, administration, troubleshooting and tuning
- Experience with cloud-based services such as AWS, GCP, and Azure (preferred)
- Grit when faced with technical issues
- A tendency to not rest until you understand why something doesn’t work
- Comfort and confidence in client-facing interactions
- Ability to work both pre- and post-sale
- Hands-on experience with the Hadoop and/or Spark ecosystem for setup, administration, troubleshooting and tuning
- Some experience with Python
- Familiarity with Ansible or other application deployment tools (Terraform, CloudFormation, etc)
- Experience with authentication and authorization systems like LDAP, Kerberos, AD, and IAM
- Experience debugging networking issues such as DNS resolutions, proxy settings, and security groups
- Some knowledge in data science and/or machine learning
- Some knowledge of Java
The expected base salary range for this role is $130,000.00 to $160,000.00, although the range may vary depending on the location of the candidate. The actual offer, reflecting the final base salary for the position, will be determined during our interview process by assessing a variety of factors, including a candidate’s experience and skills.