Key Role:
Design and implement Big Data analytic solutions on a Hadoop-based platform. Create custom analytic and data mining algorithms to help extract knowledge and meaning from vast stores of data. Refine a data processing pipeline focused on unstructured and semi-structured data. Support both quick-turn, rapid implementations and larger-scale, longer-duration analytic capability implementations.
Basic Qualifications:
-6+ years of experience with a distributed, scalable Big Data store or NoSQL database, including Accumulo, Cloudbase, HBase, or Bigtable
-Experience with MapReduce programming on Apache Hadoop and the Hadoop Distributed File System (HDFS), including processing large data stores
-Experience with the design and development of multiple object-oriented systems
-Experience with extending Free and Open-Source Software (FOSS) or COTS products
-Ability to show flexibility, initiative, and innovation when dealing with ambiguous and fast-paced situations
-Ability to obtain a security clearance
-BA or BS degree
Additional Qualifications:
-Experience with Apache Solr or Hadoop
-Experience with R or Python
-Experience with using repository management solutions
-Experience with deploying applications in a Cloud environment
-Experience with designing and developing automated analytic software, techniques, and algorithms
-MA or MS degree
Clearance:
Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information.
We’re an EOE that empowers our people, no matter their race, color, religion, sex, gender identity, sexual orientation, national origin, disability, or veteran status, to fearlessly drive change.
SIG2017