$80 - $86.50/hour
We have created a new Big Data Platforms group within Direct-To-Consumer and International (DTCI) technology with the skills, drive, and passion to innovate, create, and succeed in enabling a direct-to-consumer strategy for ESPN, Disney, and ABC products.
We are here to disrupt and start a cultural revolution in the application of data and analytics across the Company, focused on building self-serve analytics and advanced analytics using machine learning methods for Deep User Understanding and Audience Segmentation for Linear-to-Digital Ad Sales.
We need an experienced Data Engineer who can drive multiple data initiatives applying innovative architecture that can scale in the cloud. We are looking for a creative and talented individual who loves to design scalable platforms that operate at the petabyte level and extract value from both structured and unstructured real-time data. Specifically, we are looking for a technology leader to build a highly scalable and extensible Big Data platform that enables the collection, storage, modeling, and analysis of massive data sets from numerous channels. You must be self-driven to continuously evaluate new technologies, innovate, and deliver solutions for business-critical applications with little to no oversight from the management team. The internet-scale platforms that you design and build will be a core asset in delivering the highest quality content to over 150MM consumers on a monthly basis. This is an opportunity to fundamentally evolve how DTCI delivers content and monetizes our audiences.
Hours: 8:00am to 5:00pm
- Build cool things – Build a scalable analytics solution, including data processing, storage, and serving of large-scale data through batch and stream processing, covering analytics for both behavioral and ad revenue across digital and non-digital channels.
- Harness curiosity – Change the way we think about, act on, and utilize our data by performing exploratory and quantitative analytics, data mining, and discovery.
- Innovate and inspire – Think of new ways to help make our data platform more scalable, resilient, and reliable and then work across our team to put your ideas into action.
- Think at scale – Lead the transformation of a petabyte-scale batch-based processing platform to a near-real-time streaming platform using technologies such as Apache Kafka, Cassandra, Spark, and other open-source frameworks.
- Have pride – Ensure performance isn’t our weakness by implementing and refining robust data processing using Python, Java, Scala, and database technologies such as Redshift or Snowflake.
- Grow with us – Help us stay ahead of the curve by working closely with data architects, stream processing specialists, API developers, our DevOps team, and analysts to design systems that can scale elastically in ways that make other groups jealous.
- Lead and coach – Mentor other software engineers by developing re-usable frameworks. Review design and code produced by other engineers.
- ML first – Provide expert-level advice to data scientists, data engineers, and operations to deliver high-quality analytics via machine learning and deep learning, served through data pipelines and APIs.
- Build and support – Embrace the DevOps mentality to build, deploy, and support applications in the cloud with minimal help from other teams.
- 2+ years of development experience in key-value store databases such as DynamoDB, Cassandra, ScyllaDB, etc.
- 2+ years of development experience in graph databases such as AWS Neptune, Neo4j, JanusGraph, etc.
- Have 4+ years of experience developing data-driven applications using a mix of languages (Java, Scala, Python, SQL, etc.) and open-source frameworks to implement data ingest, processing, and analytics technologies.
- Data and API ninja – You are also very handy with big data frameworks such as Hadoop and Apache Spark, NoSQL systems such as Cassandra or DynamoDB, and streaming technologies such as Apache Kafka.
- Understand reactive programming and dependency-injection frameworks such as Spring for developing REST services.
- Have a technology toolbox – Hands-on experience with newer technologies relevant to the data space such as Spark, Airflow, Apache Druid, Snowflake (or any other OLAP databases).
- Cloud-first – Plenty of experience with developing and deploying in a cloud-native environment, preferably the AWS cloud.
- Prior experience building internet-scale platforms – handling petabyte-scale data and operationalizing clusters with hundreds of compute nodes in a cloud environment.
- Experience operationalizing machine learning workflows at scale is a huge plus as well.
- Experience with Content Personalization/Recommendation, Audience Segmentation for Linear to Digital Ad Sales, and/or Analytics
- Experience with open-source technologies such as Spring, Hadoop, Spark, Kafka, Druid, and Kubernetes.
- Experience working with data scientists to operationalize machine learning models.
- Proficiency in agile development methodologies, shipping features every two weeks. It would be awesome if you have a robust portfolio on GitHub and/or open-source contributions you are proud to share.
Required Education: BS or equivalent
MUST HAVE SKILL SETS:
- Java Spring, Kubernetes, and graph design
- Must have Cassandra, plus one or both of DynamoDB or a graph database – these are must-haves.
- Data modeling skills are a must – how you lay out tables and queries is very important to modeling.
- As this is a senior role, the expectation is that you will bring a wealth of knowledge to the role; media experience is not necessary.
NOTES TO RECRUITER:
- 18-month contract position with the possibility of extension/conversion.
- NO VISA Candidates, NO Corp to Corp.
- Please only submit green card holders or US citizens that can work on a W2.
- Interviews- 2 interviews- initial and then panel.
- THIS ROLE IS NOT A DBA - WE ARE NOT LOOKING FOR A DBA. We are looking for a developer who can develop in Cassandra or graph databases.