Manage and optimize Cloudera Data Platform for Big Data operations, provide technical assistance, and collaborate with teams on features and performance tuning.
Description and Requirements
Position Summary:
A highly skilled Big Data (Hadoop) Administrator responsible for the installation, configuration, engineering, and architecture of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on Red Hat Linux. Strong expertise in DevOps practices, scripting, and infrastructure-as-code for automating and optimizing operations is highly desirable. Experience in collaborating with cross-functional teams, including application development, infrastructure, and operations, is highly preferred.
Job Responsibilities:
- Manages the design, distribution, performance, replication, security, availability, and access requirements for large and complex Big Data clusters.
- Designs and develops the architecture and configurations to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, cluster changes, and compliance.
- Identifies and resolves issues utilizing structured tools and techniques.
- Provides technical assistance and mentoring to staff in all aspects of Hadoop cluster management; consults and advises application development teams on security, query optimization, and performance.
- Writes scripts to automate routine cluster management tasks and documents maintenance processing flows per standards.
- Implements industry best practices while performing Hadoop cluster administration tasks.
- Works in an Agile model with a strong understanding of Agile concepts.
- Collaborates with development teams to provide and implement new features.
- Debugs production issues by analyzing logs directly and using tools like Splunk and Elastic.
- Addresses organizational obstacles to enhance processes and workflows.
- Adopts and learns new technologies based on demand and supports team members by coaching and assisting.
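As an illustration of the routine automation described above, here is a minimal Python sketch that parses `hdfs dfsadmin -report` output to flag dead DataNodes. The sample report text and the alerting logic are illustrative assumptions for this sketch, not part of this posting; in practice the report would come from the `hdfs` CLI on a cluster node.

```python
import re

def dead_datanodes(report: str) -> int:
    """Extract the dead-DataNode count from `hdfs dfsadmin -report` output."""
    m = re.search(r"Dead datanodes \((\d+)\)", report)
    return int(m.group(1)) if m else 0

# Illustrative sample of the report header (real output comes from the CLI).
SAMPLE = """Configured Capacity: 1099511627776 (1 TB)
Live datanodes (8):
Dead datanodes (1):
"""

if dead_datanodes(SAMPLE) > 0:
    print("ALERT: dead DataNodes detected")
```

A script like this would typically run on a schedule and feed an alerting pipeline rather than print to stdout.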
Education:
Bachelor's degree in Computer Science, Information Systems, or another related field, with 14+ years of IT and infrastructure engineering work experience.
Experience:
14+ years total IT experience and 10+ years relevant experience in Big Data database administration.
Technical Skills:
- Big Data Platform Management: Expertise in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM BigSQL.
- Data Infrastructure & Security: Proficient in designing and implementing robust data infrastructure solutions with a strong focus on data security, utilizing tools like Apache Ranger and Kerberos.
- Performance Tuning & Optimization: Skilled in performance tuning and optimization of big data environments, leveraging advanced techniques to enhance system efficiency and reduce latency.
- Backup & Recovery: Experienced in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
- Linux & Troubleshooting: Strong knowledge of Linux operating systems, with proven ability to troubleshoot and resolve complex technical issues, collaborating effectively with cross-functional teams.
- DevOps & Scripting: Proficient in scripting and automation using tools like Ansible, enabling seamless integration and automation of cluster operations. Experienced in infrastructure-as-code practices and observability tools such as Elastic.
- Agile & Collaboration: Strong understanding of Agile SAFe for Teams, with the ability to work effectively in Agile environments and collaborate with cross-functional teams.
- ITSM Process & Tools: Knowledgeable in ITSM processes and tools such as ServiceNow.
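The performance-monitoring side of the skills above can be sketched in a few lines of Python. The field names below follow the common NameNode FSNamesystem JMX bean (`CapacityTotal`, `CapacityUsed`), but the snapshot values are made up for illustration; a real script would fetch them from the NameNode's JMX endpoint.

```python
def capacity_used_pct(metrics: dict) -> float:
    """Percentage of HDFS capacity in use, from FSNamesystem-style metrics."""
    return 100.0 * metrics["CapacityUsed"] / metrics["CapacityTotal"]

# Hypothetical metrics snapshot (real values come from the NameNode JMX endpoint).
snapshot = {"CapacityTotal": 2_000_000_000_000, "CapacityUsed": 1_500_000_000_000}

pct = capacity_used_pct(snapshot)
print(f"HDFS usage: {pct:.1f}%")  # 75.0%
```

In an Ansible- or cron-driven setup, a check like this would gate alerts or trigger cleanup playbooks when usage crosses a threshold.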
Other Critical Requirements:
- Automation and Scripting: Proficiency in automation tools and programming languages such as Ansible and Python to streamline operations and improve efficiency.
- Analytical and Problem-Solving Skills: Strong analytical and problem-solving abilities to address complex technical challenges in a dynamic enterprise environment.
- 24x7 Support: Ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability.
- Team Management and Leadership: Proven experience managing geographically distributed and culturally diverse teams, with strong leadership, coaching, and mentoring skills.
- Communication Skills: Exceptional written and oral communication skills, with the ability to clearly articulate technical and functional issues, conclusions, and recommendations to stakeholders at all levels.
- Stakeholder Management: Prior experience in effectively managing both onshore and offshore stakeholders, ensuring alignment and collaboration across teams.
- Business Presentations: Skilled in creating and delivering impactful business presentations to communicate key insights and recommendations.
- Collaboration and Independence: Demonstrated ability to work independently as well as collaboratively within a team environment, ensuring successful project delivery in a complex enterprise setting.
About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East.
Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services.
At MetLife, it's #AllTogetherPossible. Join us!
#BI-Hybrid
Top Skills
Ansible
Apache Hadoop
Big Data
Cloudera Data Platform
Cloudera Flow Management
Elastic
Hadoop
HBase
Hive
IBM BigSQL
JanusGraph
Kafka
Linux
NiFi
Python
Ranger
ServiceNow
Solr
Spark