The Disney Decision Science and Integration (DDSI) analytics consulting team is responsible for supporting clients across The Walt Disney Company, including Direct-to-Consumer & International, Media Networks (e.g., ABC, ESPN), Studio Entertainment (e.g., The Walt Disney Studios, Disney Theatrical Group), and Parks, Experiences & Consumer Products. DDSI leverages technology, data analytics, optimization, and statistical and econometric modeling to explore opportunities, shape business decisions, and drive business value.
The Data Engineering team (within DDSI) is seeking candidates for roles at multiple levels. The specific job level for an individual candidate will be determined based on their education, prior experience, and demonstrated technical and leadership proficiencies.
The team is involved in activities ranging from data acquisition and validation to designing and implementing ETL/ELT data pipelines and databases, as well as evolving our next-generation data platform to fulfill the needs of our applications, data services, ad-hoc analytics, and self-service/POC initiatives.
Work assignments may cover activities such as data requirements gathering, source-to-target mapping, data validation scripting and review, data visualization, developing and monitoring ETL/ELT data pipelines, designing and implementing database schemas/tables/views, implementing data services APIs, performance tuning, and evolving our data analytics platform. Beyond project needs, team members may participate in the architectural evolution of our data engineering patterns, frameworks, systems, and platforms, including defining best practices, standards, principles, and policies.
Technologies generally leveraged to fulfill the work include, but are not limited to, SQL, Python, Spark, PySpark, SparkSQL, Hadoop/Hive, Docker, GitLab, Airflow, Kafka, Lambda, Snowflake, and PostgreSQL.
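To give a flavor of the day-to-day work described above (source-to-target mapping and data validation scripting), here is a minimal, illustrative Python sketch. All table and column names are hypothetical, and an in-memory SQLite database stands in for a real warehouse such as Snowflake or PostgreSQL:

```python
import sqlite3

# Hypothetical example: stage rows, load them to a target table via a
# simple source-to-target mapping, then validate the load by row count.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Stand-in for a staging table populated by an upstream extract.
cur.execute("CREATE TABLE stg_orders (order_id INTEGER, amount REAL)")
cur.executemany(
    "INSERT INTO stg_orders VALUES (?, ?)",
    [(1, 19.99), (2, 5.00), (3, 42.50)],
)

# Target table; the INSERT ... SELECT is the source-to-target mapping
# (here just a column rename from order_id to id).
cur.execute("CREATE TABLE fct_orders (id INTEGER PRIMARY KEY, amount REAL)")
cur.execute("INSERT INTO fct_orders SELECT order_id, amount FROM stg_orders")

# Validation step: row counts must match between source and target.
src_count = cur.execute("SELECT COUNT(*) FROM stg_orders").fetchone()[0]
tgt_count = cur.execute("SELECT COUNT(*) FROM fct_orders").fetchone()[0]
assert src_count == tgt_count, f"row-count mismatch: {src_count} vs {tgt_count}"
```

In a production pipeline this kind of check would typically run as a task in an orchestrator such as Airflow rather than as a standalone script.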
Experience with Python
Experience with SQL
Knowledgeable in designing, building and maintaining ETL/ELT data pipelines
Strong understanding of relational database design and proficiency with a database such as PostgreSQL, Teradata, Redshift, MySQL, or Snowflake
Solid understanding of the differences among application databases, data warehouses, and data lakes
Experience working with large datasets and big data technologies, preferably cloud-based, such as Snowflake, Databricks, or similar
Knowledgeable on cloud architecture and product offerings, preferably AWS
Experience developing across multiple environments (Dev, QA, Prod, etc.) and with procedures for code deployment/promotion
Experience managing and deploying code using a source control product such as GitLab/GitHub
Experience with Spark, PySpark and/or Scala
Experience utilizing Hadoop and/or Hive
Experience leveraging containerization technologies such as Docker or Kubernetes
Familiarity with data streaming vs. batch data processing
Hands-on knowledge of job scheduling software like Apache Airflow, Amazon MWAA, or UC4
Experience leveraging AWS Glue, Apache Kafka, Talend or Apache NiFi
Understanding of NoSQL databases and best-use scenarios for products such as MongoDB, Cassandra, Neo4j, or Redis
Experience driving best practices for data engineering software development processes
Bachelor’s degree (Computer Science, Mathematics, Engineering or related field preferred)
Master’s degree (Computer Science, Mathematics, Engineering or related field preferred)