AE610C84 – Rohit T – DevOps/Hadoop Admin

Resume posted by Glady in DevOps Engineer.
Desired Rate: $90.00/hr
Desired position type: C2C
Current Location: Raleigh, North Carolina, United States

gcorreya@compunnel.com
Tel: 609-779-1361

Summary

PROFESSIONAL SUMMARY

– Innovative, results-driven, hardworking, and persistent professional with a passion for learning new technologies and applying them to solve real-world problems.
– Over 8 years of experience in the IT industry across DevOps, Software Configuration Management, Build and Release Engineering, and Linux/UNIX administration in various domains.
– Extensive experience in Hadoop MapReduce programming, Spark, Scala, Pig, NoSQL, and Hive.
– Experience with Cloudera Manager administration: installing and updating Hadoop and its related components in both single-node and multi-node cluster environments using Apache and Cloudera distributions.
– Good experience in UNIX/Linux administration, along with SQL development experience designing and implementing relational database models to meet business needs in different domains.
– Hands-on experience with major components of the Hadoop ecosystem, including HDFS, the MapReduce framework, YARN, HBase, Hive, Pig, Sqoop, and ZooKeeper.
– Experience in understanding Hadoop security requirements and integrating with a Kerberos authentication infrastructure: KDC server setup, creating realms/domains, and ongoing principal management (see the sketch after this list).
– Experience in commissioning, decommissioning, balancing, and managing nodes, and in tuning servers for optimal cluster performance.
– As an administrator, involved in cluster maintenance, troubleshooting, and monitoring, and followed proper backup and recovery strategies.
– Experience in application deployments and environment configuration using Ansible and TeamCity.
– In-depth knowledge of AWS cloud services: Compute, Networking, Storage, and Identity & Access Management.
– Experienced in implementing organizational DevOps strategy across Linux and Windows server environments.
– Experience in maintaining and executing build scripts to automate development and production builds.
– Maintained and administered source code repositories, including implementation of automated controls and enhancements.
– Experience working with source control tools such as GitHub and SVN, and with source control concepts such as branches, merges, and tags.
– Experience using Maven and Ant as build tools to produce deployable artifacts from source code.
– Day-to-day administration of Prod, Dev, and Test environments; 24×7 on-call support.
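
A minimal sketch of the MIT Kerberos KDC setup referenced above, assuming a hypothetical realm EXAMPLE.COM, host name, and keytab path rather than any specific cluster's values:

    # Initialize the KDC database for a new realm (run on the KDC host)
    kdb5_util create -r EXAMPLE.COM -s

    # Create an administrative principal for ongoing management
    kadmin.local -q "addprinc admin/admin@EXAMPLE.COM"

    # Create a service principal for a Hadoop daemon and export its keytab
    kadmin.local -q "addprinc -randkey hdfs/namenode01.example.com@EXAMPLE.COM"
    kadmin.local -q "ktadd -k /etc/security/keytabs/hdfs.keytab hdfs/namenode01.example.com@EXAMPLE.COM"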

TECHNICAL SKILLS

Hadoop Framework – HDFS, MapReduce, Hive, HBase, Sqoop, Kafka, Oozie, Hue
Languages – SQL, PL/SQL, Shell
Databases – Oracle Exadata, MSSQL
Monitoring Tools – Cloudera Manager
Cloud Platform – Amazon Web Services (AWS)
DevOps Tools – Jenkins, Puppet, Chef, Ansible, Docker, VMware
Operating Systems – Windows, Linux
Network Security – Kerberos
Version Control – Git, GitHub, GitLab

Education

University of Houston – Clear Lake (Houston, TX), 2015
– M.S., Engineering Management

Osmania University (India), 2009
– Bachelor's degree


Experience

Sr. DevOps Engineer
Deutsche Bank, Cary, NC Jan 2018 – Present

– Worked with an agile development team to deliver an end-to-end continuous integration/continuous delivery product in an open-source environment using Chef and Jenkins.
– Responsible for cluster maintenance, monitoring, commissioning and decommissioning DataNodes, troubleshooting, cluster planning, and managing and reviewing data backups and log files.
– Worked with the Data Science team to gather requirements for various data mining projects.
– Installed 5 Hadoop clusters for different teams and developed a data lake that serves as a base layer for storage and analytics. Provided services to developers: installing custom software, upgrading Hadoop components, resolving issues, and helping troubleshoot long-running jobs. Served as L3 and L4 support for the data lake and managed clusters for other teams.
– Involved in implementing security on the Cloudera Hadoop cluster with Kerberos, working with the operations team to move the non-secured cluster to a secured one.
– Handled importing of data from various sources, performed transformations using Hive, MapReduce, and Spark, and loaded data into HDFS (see the Sqoop sketch after this list). Set up Hadoop security using MIT Kerberos, AD integration (LDAP), and Sentry authorization.
– Management and administration of AWS services: CLI, EC2, VPC, S3, ELB, Glacier, CloudTrail, IAM, and Trusted Advisor.
– Created automated pipelines in AWS CodePipeline to deploy Docker containers to AWS ECS using services such as CloudFormation, CodeBuild, CodeDeploy, and S3 (see the AWS CLI sketch after this entry).
– Performed cluster maintenance, including creation and removal of nodes, using Cloudera Manager Enterprise.
– Performance-tuned Hadoop clusters and MapReduce routines, screened cluster job performance, handled capacity planning, and monitored cluster connectivity and security.
– Used continuous integration tools such as Jenkins and Hudson to automate build processes.
– Conceived, designed, installed, and implemented a CI/CD automation system.
– Created and updated scripts, modules, files, and packages.
– Performed day-to-day Git/TeamCity support for different projects; responsible for the design and maintenance of Git repositories, views, and access control strategies.
– Proposed branching strategies, created branches, and performed merges using version control systems such as Git and GitHub.
– Created TeamCity pipelines to automate deployments using the Kotlin DSL.
– Worked in a Scrum Agile process with two-week iterations, delivering new features and working software at each iteration.
– Scheduled volume snapshots for backup, performed root cause analysis of failures, documented bugs and fixes, and scheduled downtimes and cluster maintenance.
– Evaluated new technologies and tools and introduced them to the company, helping build an agile development environment that improved product quality and work efficiency.
– Provided direct server support during deployments and general production operations.
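
A hedged sketch of the kind of Sqoop import described in this list; the connection string, table, and paths are hypothetical placeholders, not the actual sources:

    # Pull a table from a relational source into HDFS and register it in Hive
    sqoop import \
      --connect jdbc:oracle:thin:@//dbhost.example.com:1521/ORCL \
      --username etl_user -P \
      --table CUSTOMERS \
      --target-dir /data/raw/customers \
      --hive-import \
      --hive-table analytics.customers \
      --num-mappers 4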

Environment: Cloudera Hadoop, Git, JIRA, TeamCity, shell scripting, Jenkins, Ansible, SQL, AWS, Linux, and Windows.
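
A minimal sketch of the AWS-side deployment step referenced in this entry, assuming a hypothetical stack, cluster, and service name; the actual pipeline wires calls like these into CodePipeline stages:

    # Create or update the CloudFormation stack that defines the ECS service
    aws cloudformation deploy \
      --template-file ecs-service.yaml \
      --stack-name demo-ecs-service \
      --capabilities CAPABILITY_IAM

    # Force the ECS service to pull and run the newly pushed container image
    aws ecs update-service \
      --cluster demo-cluster \
      --service demo-service \
      --force-new-deployment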

Sr. DevOps Engineer
Codeeva Inc.
Client: Deutsche Bank, Cary, NC Jun 2016 – Dec 2017

– Involved in the end-to-end Hadoop cluster setup process: installation, configuration, and monitoring of the cluster.
– Responsible for cluster maintenance, commissioning and decommissioning DataNodes, cluster monitoring, troubleshooting, and managing and reviewing data backups and Hadoop log files.
– Monitored systems and services; handled architecture design and implementation of Hadoop deployment, configuration management, backup, and disaster recovery systems and procedures.
– Configured property files such as core-site.xml, hdfs-site.xml, and mapred-site.xml based on job requirements.
– Experienced in defining ingestion job flows with Oozie and importing and exporting data into HDFS using Sqoop and Flume.
– Installed various Hadoop ecosystem components and Hadoop daemons.
– Designed and implemented fully automated server build management, monitoring, and deployment solutions spanning multiple platforms, tools, and technologies, including TeamCity server/agent, SSH, etc.
– Integrated Git with TeamCity to automate builds.
– Managed the Git branching strategy for several applications by creating release and development branches, ensuring the integrity of trunk.
– Used Ant to perform daily and weekly software builds.
– Installed and configured Ansible and created various modules and playbooks to automate application deployment (see the playbook sketch after this list).
– Implemented a continuous delivery framework using TeamCity, Ansible, Maven, and Nexus in a Linux environment.
– Installed, administered, and configured TeamCity for continuous integration builds, automated deployments, and notifications.
– Automated administration tasks through scripting and job scheduling with cron (see the crontab sketch after this entry).
– Managed day-to-day operations of the cluster for backup and support.
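
A minimal sketch of the Ansible-driven deployment automation described above, assuming a hypothetical playbook, inventory path, and service name:

    # Write a small playbook: push an application config, then restart the service
    cat > deploy_app.yml <<'EOF'
    - hosts: app_servers
      become: yes
      tasks:
        - name: Copy application configuration
          copy:
            src: files/app.conf
            dest: /etc/myapp/app.conf
          notify: restart myapp
      handlers:
        - name: restart myapp
          service:
            name: myapp
            state: restarted
    EOF

    # Run it against the production inventory (e.g., from a TeamCity build step)
    ansible-playbook -i inventory/prod deploy_app.yml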

Environment: Hadoop, Git, JIRA, Maven, Ant, Jenkins, UNIX shell scripting, OpenStack, JBoss application servers, AWS
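
A sketch of the cron-based job scheduling mentioned in this entry; the script paths and run times are hypothetical placeholders:

    # crontab entries: minute hour day-of-month month day-of-week command
    # Nightly log cleanup at 02:30
    30 2 * * * /opt/admin/cleanup_logs.sh >> /var/log/cleanup.log 2>&1
    # Weekly backup on Sundays at 01:00
    0 1 * * 0 /opt/admin/backup_home.sh >> /var/log/backup.log 2>&1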

Software Engineer
Datamine Infotech, India Jan 2010 – July 2014

– Prepared specs for new infrastructure: VMware servers, disk storage, network switches, routers, firewalls, and VPNs.
– Administered Linux servers for several functions, including managing Apache/Tomcat servers, mail servers, and MySQL databases in both development and production.
– Installed Red Hat Linux using Kickstart (see the sketch at the end of this entry) and applied security patches to harden servers per company policy.
– Created and managed users; administered and maintained Red Hat Linux 3.0, 4.0, 5.0, and 6.0 AS/ES; troubleshot hardware, operating system, application, and network problems and performance issues; deployed the latest patches for Linux and application servers; performed Red Hat Linux kernel tuning.
– Implemented and configured network services such as HTTP, DHCP, and TFTP.
– Created a file transfer server for customer data exchange; automated network permissions and maintained user and file system quotas on Red Hat Linux.
– Wrote Bash shell scripts to automate routine activities (see the script sketch after this list); monitored the trouble ticket queue to attend to user and system calls.
– Worked on the Preload Assist and PICS projects; migrated database applications from Windows 2000 Server to Linux servers.
– Installed and set up Oracle 9i on Linux for the development team.
– Handled Linux kernel and memory upgrades and swap areas; performed Red Hat Linux Kickstart installations.
– Capacity planning, infrastructure design, and ordering of systems.
– Installed and configured the Nagios monitoring system to monitor the production server environment.
– Attended team meetings and change control meetings to report installation progress and upcoming environment changes.
– Updated data in the inventory management package for software and hardware products.
– Worked with DBAs on RDBMS database installation, restoration, and log generation.
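
A short sketch of the kind of routine-automation Bash script mentioned above; the mount points, threshold, and alert address are hypothetical:

    #!/bin/bash
    # Alert when disk usage on key mounts crosses a threshold
    THRESHOLD=90
    for mount in / /var /home; do
      # Column 5 of `df -P` is the capacity percentage, e.g. "42%"
      usage=$(df -P "$mount" | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
      if [ "$usage" -ge "$THRESHOLD" ]; then
        echo "$(hostname): $mount at ${usage}% (>= ${THRESHOLD}%)" \
          | mail -s "Disk usage alert" ops@example.com
      fi
    done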

Environment: Red Hat Linux 3.0/4.0/5.0 AS/ES, HP DL585, Oracle 9i/10g, Samba, VMware, Tomcat 3.x/4.x/5.x, Apache HTTP Server 1.x/2.x, Bash
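
A minimal sketch of the unattended Kickstart install referenced in this entry, written as a shell step that publishes a hypothetical ks.cfg on the install server; the URL, password hash, and package set are placeholders:

    # Publish a minimal Kickstart file for PXE-booted hosts
    cat > /var/www/html/ks/ks.cfg <<'EOF'
    install
    url --url http://kickstart.example.com/rhel/
    lang en_US.UTF-8
    keyboard us
    rootpw --iscrypted $1$examplehash$xxxxxxxxxxxxxxxxxxxxxx
    timezone America/New_York
    bootloader --location=mbr
    clearpart --all --initlabel
    autopart
    %packages
    @base
    openssh-server
    %end
    EOF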

Skills

  • Hadoop Framework – HDFS, MapReduce, Hive, HBase, Sqoop, Kafka, Oozie, Hue
  • Languages – SQL, PL/SQL, Shell
  • Databases – Oracle Exadata, MSSQL
  • Monitoring Tools – Cloudera Manager
  • Cloud Platform – Amazon Web Services (AWS)
  • DevOps Tools – Jenkins, Puppet, Chef, Ansible, Docker, VMware
  • Operating Systems – Windows, Linux
  • Network Security – Kerberos
  • Version Control – Git, GitHub, GitLab

Specialties

    Ansible, AWS, Chef, Cloudera Manager, Docker, Git, GitHub, GitLab, HBase, HDFS, Hive, Hue, Jenkins, Kafka, Kerberos, Linux, MapReduce, MSSQL, Oozie, Oracle Exadata, PL/SQL, Puppet, Shell, SQL, Sqoop, VMware, Windows

Groups & Associations

    H1B
