Design and implement efficient large-scale data solutions in the areas of data ingestion, data curation, data quality, data visualization, and data distribution using Informatica, SnapLogic, Tableau, and MicroStrategy. Manage and query data in data lakes and/or cloud data repositories such as Google BigQuery and AWS Redshift. Responsible for writing well-structured, testable PySpark code to implement use cases using Machine Learning techniques in a production-grade environment. Design and develop UNIX shell scripts as part of the ETL process to automate the loading and extraction of data. Create test plans, test cases, and test scripts, and perform data testing and validation. Will work in Charlotte, NC and/or at various client sites throughout the U.S. Must be willing to travel and/or relocate.
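
The UNIX shell scripting duty above can be sketched roughly as follows. This is a minimal, illustrative example only: the directory paths, file pattern, and demo input are hypothetical assumptions, and a real ETL job would invoke an actual loader (e.g. a database bulk-load utility) where the placeholder echo appears.

```shell
#!/bin/sh
# Minimal sketch of an ETL load wrapper. All paths and names below are
# hypothetical; a production script would be driven by job configuration.
set -eu

LANDING_DIR="/tmp/etl_landing"   # hypothetical landing zone for inbound files
ARCHIVE_DIR="/tmp/etl_archive"   # processed files are moved here

mkdir -p "$LANDING_DIR" "$ARCHIVE_DIR"

# Demo input file (a real run would receive files from upstream feeds).
printf 'id,amount\n1,100\n' > "$LANDING_DIR/sample.csv"

for f in "$LANDING_DIR"/*.csv; do
    [ -e "$f" ] || continue      # glob matched nothing: no files to load
    echo "loading $f"            # placeholder for the actual load command
    mv "$f" "$ARCHIVE_DIR/"      # archive only after a successful load
done
echo "load complete"
```

Moving each file to an archive directory after a successful load keeps the script idempotent: rerunning it does not reprocess files, and failed runs leave unprocessed files in place for the next attempt.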