Big Data Engineer, Mid.

Key Role:

Design and implement big data ingest, storage, Extract, Transform, and Load (ETL), and workflow solutions using relational or NoSQL data designs. Code in multiple programming languages and learn new technologies quickly. Work closely with Architects, Developers, Administrators, and Business Analysts to analyze technical designs and business requirements and to create and implement effective solutions.

Basic Qualifications:

-2+ years of experience with a programming language, including Java, Python, Scala, C, C++, or C#

-1+ years of experience with data warehouse design, including snowflake or star schemas, or with big data store design, including data lakes

-1+ years of experience with RDBMS ETL processes using PL/SQL, T-SQL, or SQL

-1+ years of experience working with Microsoft SQL Server, Oracle, MySQL, PostgreSQL, SQLite, or MariaDB

-Ability to obtain a security clearance

-BA or BS degree

Additional Qualifications:

-Knowledge of cloud platforms including AWS and Azure

-Knowledge of container orchestration with Kubernetes, either native or through OpenShift

-Knowledge of containerized development with Docker

-Knowledge of data ingest or real-time messaging using Kafka, Spark, Hive, NiFi, or Kinesis Firehose

-Ability to learn big data technologies and big data design patterns

-Ability to learn the Hadoop technology stack and services in the data access layer

Clearance:

Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information.

We’re an EOE that empowers our people—no matter their race, color, religion, sex, gender identity, sexual orientation, national origin, disability, veteran status, or other protected characteristic—to fearlessly drive change.
