Principal Big Data Engineer
Location: Cincinnati, OH 45202
Conexess Group is a staffing company that specializes in finding the right talent for our clients and connecting people with new opportunities.
Our client is a technology-focused leader in the financial space seeking a highly experienced Big Data Engineer to join their Big Data Engineering and Data Architecture team. The Data Engineer designs and builds platforms, tools, and solutions that help the organization manage, secure, and generate value from its data, creating scalable, reusable solutions for gathering, storing, processing, and serving data at both small and very large scales. These solutions may include on-premises and cloud-based platforms across domains such as ETL, business intelligence, analytics, persistence (relational, NoSQL, data lakes), search, messaging, data warehousing, stream processing, and machine learning.
As a member of the Big Data Engineering and Data Architecture team, this person will specialize in Big Data technologies and solutions, helping to build and support the company’s Data Lake and Streaming Data Platforms. This team will also help to shape the strategy related to traditional database and data warehousing technologies, ETL/ELT platforms, and business intelligence tooling.
- Very competitive compensation + annual bonus
- Excellent benefits + 401K matching
- Technology-focused organization
- Relocation assistance provided (if needed)
RESPONSIBILITIES
- Design, develop, and support Big Data solutions, APIs, tools, and processes that enable rapid delivery of business capabilities.
- Work closely with IT application teams, Enterprise Architecture, Infrastructure, Information Security, and LOB stakeholders to translate business and technical strategies into data-driven solutions for the organization.
- Act as a technical expert addressing problems related to system and application design, performance, integration, security, etc.
- Conduct research and development based on current trends and technologies related to the financial industry, data engineering and architecture, data security, and related topics.
- Work with developers to build CI/CD pipelines, self-service build tools, and automated deployment processes.
- Evaluate software products and provide documented recommendations as needed.
- Provide support and troubleshooting for Big Data platforms.
- Participate in the planning process for hardware and software.
- Plan and work on internal projects as needed, including legacy system replacement, monitoring and analytics improvements, tool development, and technical documentation.
- Provide technical guidance and mentoring for other team members.
REQUIRED SKILLS & QUALIFICATIONS
- Legally authorized to work for a US employer without sponsorship
- At least 8 years of experience in hands-on software development roles, including at least 6 years developing and supporting Java applications
- Significant experience with two or more major RDBMS products
- Significant experience working with and supporting Unix/Linux and Windows systems
- Proficient in relational database modeling concepts and techniques
- Strong knowledge of application and data security concepts, best practices, and common vulnerabilities
- Experience with Big Data technologies (e.g., Hadoop, Hive, HBase, Spark, Elasticsearch/Solr, Kafka) strongly preferred
- Experience with any of the following is preferred, but not required:
  - IBM BigInsights or another major Hadoop distribution
  - Metadata management products
  - Commercial or open-source ETL tools (esp. IBM InfoSphere Information Server / DataStage or Talend)
  - BI and Data Science tools
  - Messaging systems (esp. Kafka/Confluent, MQ)
  - Machine learning toolkits
  - Data warehousing
  - Spring Framework, Python, Scala, R
  - Version control systems (esp. ClearCase, Git)
  - Continuous integration/delivery
  - Infrastructure automation and virtualization (esp. Docker, Chef, Puppet, Ansible)
  - Major cloud providers (esp. Amazon AWS and IBM Bluemix)
  - REST API design and development