Senior Software Engineer Big Data & Analytics
Summary
Sling TV L.L.C. provides an over-the-top (internet-delivered) television experience on TVs, tablets, gaming consoles, computers, smartphones, smart TVs and other streaming devices. Distributed across a variety of strategic device partners, including Google, Amazon, Apple TV, Microsoft, Roku, Samsung, LG, Comcast, and many others, Sling TV offers two primary domestic streaming services that collectively include more than 100 channels of top content. Featured programmers include Disney/ESPN, NBC, AMC, A&E, EPIX, NFL Network, NBA TV, NHL Networks, Pac-12 Networks, Hallmark, Viacom, and more. For Spanish-speaking customers, Sling Latino offers a suite of standalone and extra Spanish-programming packages tailored to the U.S. Hispanic market. And for those seeking international content, Sling International currently provides more than 300 channels in 20 languages (available across multiple devices) to U.S. households.
Sling TV is the #1 Live TV Streaming Service. Sling TV is a next-generation service that meets the entertainment needs of today’s contemporary viewers. Visit www.Sling.com. We are driven by curiosity, pride, adventure, and a desire to win – it’s in our DNA. We’re looking for people with boundless energy, intelligence, and an overwhelming need to achieve to join our team as we embark on the next chapter of our story.
Opportunity is here. We are Sling.
Job Duties and Responsibilities
About the position
This hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.
Our environment is…
- Complex
- Highly elastic
- Based on some of the latest and greatest cloud-native technologies
- Very fast paced
Your team will…
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Work with data scientists to build pipelines for data models.
To be successful in this role, you will need to be…
- Highly motivated, driven & hard working
- Not afraid to fail and comfortable working independently and with a team
- Comfortable working with massive datasets in both real-time and batch processing, with superior analytics skills
- Comfortable talking to and working with Senior Executives
- Able to apply data mining techniques and statistical analysis, build high-quality prediction systems integrated with our product, and perform ad-hoc analysis, presenting results in a clear manner
- Comfortable processing, cleansing, and verifying the integrity of data used for analysis
- Able to enhance data collection procedures to include information that is relevant for building analytic systems
- A team player. We have a great group of diverse folks working together in harmony. Big egos and “super heroes” need not apply.
Skills - Experience and Requirements
You would be considered a great fit for this role if you have the following:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Experience with stream-processing systems: Storm, Spark-Streaming, Kafka Streams, etc.
- Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
- Experience with Go (Golang) for building microservices
- Experience with Docker and Kubernetes
- Experience with Scrum/Agile development methodologies, along with strong project management and organizational skills.
- Successful track record of developing quality software products and shipping production-ready software
- 8 to 10 years of software development experience in a professional business environment
- Advanced working knowledge of SQL, including query authoring, experience with relational databases, and working familiarity with a variety of databases.
- Experience building and optimizing ‘Big Data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Experience supporting and working with cross-functional teams in a dynamic environment.
These qualifications would make you stand out among other applicants:
- Great communication skills - someone who is passionate about evangelizing the value of advanced data science capabilities.
- Experience working with AWS - SageMaker, Athena, S3, and Redshift.
- Master’s Degree in Engineering or a related field