As a member of our data engineering team, you will help build and manage a next-generation cloud-based data platform, working collaboratively with data scientists, analytics teams, and business product owners in a fast-paced, technology-driven organization. In this role, you will:
- Own the full lifecycle of data within the enterprise ecosystem.
- Design robust, scalable solutions and data pipelines to automate the ingestion, processing, and delivery of all types of data: structured and unstructured, batch and real-time streaming.
- Evaluate, select, and implement new tools, frameworks, and applications required to expand our platform capabilities.
- Understand and implement best practices in management of enterprise data, including master data, reference data, metadata, and data quality metrics.
What do we require for this role?
- Proficiency writing complex SQL, with experience across multiple database platforms and SQL-based data warehouses
- Familiarity with Python and experience with Bash shell scripting
- Experience with data warehousing architecture and implementation, including hands-on experience developing ETL processes (e.g., Informatica or SSIS)
- Outstanding interpersonal and written communication skills, with the ability to work in a team environment
What do we prefer for this role?
- Familiarity with cloud-based data engineering (AWS, GCP, or Azure)
- Familiarity with Agile software development practices and working on a Scrum team
- Hands-on experience with source control and versioning strategies using Git
- Experience applying DevOps practices to data engineering, including automated testing and deployment of database changes through a continuous integration pipeline
- Relevant technology or platform certification (e.g., AWS Certified, Microsoft Certified)
- BS in Computer Science, Computer Engineering, or a related discipline, or equivalent work experience