Job Duties: Design and develop database and application solution strategy, technical design, architecture, and support for the implementation of enterprise database and data warehouse development. Specific duties include:

- Build and maintain code to populate Hadoop HDFS with log events from real-time data feeds and with data loaded from production SQL systems.
- Assess the impact of external production-system changes on Spark-based Big Data systems and implement the corresponding ETL changes to ensure consistent and accurate data flows.
- Build Hive scripts that ingest millions of records, and tune those scripts for performance.
- Use Sqoop to move data from RDBMS sources into HDFS, and develop Hive/Pig scripts to perform analysis.
- Create, develop, and support Business Intelligence initiatives using Informatica PowerCenter.
- Deliver advanced and complex reporting solutions, including dashboards and standardized reports, using OBIEE.
- Perform integration testing and production deployment through GitHub.
- Write stored procedures, packages, functions, and views in PL/SQL.

Will work in Charlotte, NC and/or at various client sites throughout the U.S. Must be willing to travel and/or relocate.
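As a minimal illustration of the Hive ingestion and tuning work described above, the following sketch stages raw log events in an external table and loads them into a date-partitioned table (partitioning being a typical performance lever for multi-million-row scans). All table names, columns, and the HDFS path are hypothetical, not taken from any actual system.

```sql
-- Hypothetical names throughout; a sketch of a partitioned Hive ingest.
-- External table over raw log events already landed in HDFS.
CREATE EXTERNAL TABLE IF NOT EXISTS raw_log_events (
  event_ts STRING,
  user_id  BIGINT,
  action   STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/raw/log_events';

-- Load into a date-partitioned table so analysis queries prune partitions
-- instead of scanning the full history.
SET hive.exec.dynamic.partition.mode=nonstrict;

INSERT OVERWRITE TABLE log_events PARTITION (load_date)
SELECT event_ts, user_id, action, to_date(event_ts) AS load_date
FROM raw_log_events;
```

In practice such a script would also choose a columnar storage format (e.g. ORC) and compression for the target table; those settings are omitted here to keep the sketch short.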
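The PL/SQL duty above can be sketched in the same spirit: a small package exposing one procedure that refreshes a summary table for a given load date. The package, procedure, and table names are invented for illustration only.

```sql
-- Hypothetical schema; a minimal sketch of a PL/SQL package.
CREATE OR REPLACE PACKAGE etl_pkg AS
  PROCEDURE refresh_daily_sales(p_load_date IN DATE);
END etl_pkg;
/

CREATE OR REPLACE PACKAGE BODY etl_pkg AS
  -- Rebuilds one day of the summary table idempotently:
  -- delete the day's rows, then re-aggregate from the detail table.
  PROCEDURE refresh_daily_sales(p_load_date IN DATE) IS
  BEGIN
    DELETE FROM daily_sales
     WHERE load_date = p_load_date;

    INSERT INTO daily_sales (load_date, product_id, total_amt)
    SELECT p_load_date, product_id, SUM(amount)
      FROM sales
     WHERE TRUNC(sale_ts) = p_load_date
     GROUP BY product_id;

    COMMIT;
  END refresh_daily_sales;
END etl_pkg;
/
```

The delete-then-insert pattern makes the refresh safe to rerun for the same date, which matters when upstream ETL jobs are restarted.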