Spectrum - Commerce City, Colorado
Company Overview:
Charter Communications is America's fastest-growing TV, Internet, and voice company. We're committed to integrating the highest-quality service with superior entertainment and communications products. Charter is at the intersection of technology and entertainment, facilitating essential communications that connect 24 million residential and business customers in 41 states. Our commitment to serving customers and exceeding their expectations is the bedrock of Charter's business strategy, and it's the philosophy that guides our 90,000 employees.

JOB SUMMARY
The Advanced Analytics group has implemented and operates a new advanced Big Data analytics platform that enables self-service analytics, decision engineering support, machine learning, modeling, forecasting, and optimization in Charter's Advanced Engineering organization. This position is responsible for creating and maintaining scalable, reliable, consistent, and repeatable platforms, systems, and models that support data, data products, and analytical products for advanced analytics by gathering, processing, exploring, and modeling raw and diverse data at scale. It requires lifecycle management of multiple data sources, data discovery processes, and models. This position architects and delivers models, analytics, automation, and self-service, which are major tenets of the analytics platform.

The most successful candidate will have lifecycle experience in machine learning, deep learning, and/or artificial intelligence within modern parallel environments, such as GPU-enabled platforms and distributed clusters. This lifecycle includes conceiving, developing, prototyping, analyzing, and implementing models with very large (i.e., the top end of big data) and diverse datasets, in collaboration with stakeholders. Desirable experience includes using big data platforms as sources (e.g., Snowflake, AWS/S3, and Hadoop) to create deep learning and machine learning models with GPU-accelerated Python (e.g., NVIDIA DGX/RAPIDS).

MAJOR DUTIES AND RESPONSIBILITIES
Frames and models meaningful business and engineering scenarios that impact critical business and engineering processes, architectures, and/or decisions

Researches, develops, and implements machine learning models, deep learning models, artificial intelligence, and algorithms that solve business and engineering problems

Develops innovative and effective approaches to solve model problems while communicating the results and methodologies

Leverages multiple data sources to produce products that solve the needs of business and engineering for descriptive models, predictive models, diagnostic models, and prescriptive models

Applies data mining, data discovery, machine learning, deep learning, and artificial intelligence techniques to large structured and unstructured datasets for model creation

Gathers and processes raw data at scale from any source and in any format

Balances workloads and operational demands with open source technologies, cloud services, and commercial solutions while optimizing cost and time-to-solution demands

Creates highly reusable code modules, templates, and packages that can be leveraged across model lifecycle

Increases speed to delivery by architecting and implementing automated solutions

Mentors, educates, and provides senior leadership to data scientists, data analysts, and data engineers

REQUIRED QUALIFICATIONS
Ability to create proof-of-concept experiments for analytics, machine learning, or visualization tools that include hypotheses, test plans, and outcome analyses.

Ability to create unsupervised and supervised models. Time series expertise is a plus.

Solid knowledge of statistics and statistical techniques.

Exceptional programming skills in Python.

Experience with one of the deep learning frameworks, such as TensorFlow or PyTorch.

Ability to develop highly scalable systems, algorithms, and tools to support machine learning, deep learning, and artificial intelligence solutions.

Experience with lifecycle management using Docker and containerization.

Experience with Snowflake, AWS, Hadoop, and/or Spark.

Excellent pattern recognition and predictive modeling skills.

Experience extracting data and delivering complete analytics and machine learning products, primarily using Python; secondary experience with R, Java, JavaScript, C, C++, Scala, or Julia is an asset.

Ability to work closely with other data scientists, data engineers, and systems/infrastructure teams to deploy end-to-end models capable of serving production traffic in real time.

Ability to develop, integrate, and optimize model/AI/ML/DL pipelines.

Ability to leverage modern parallel environments, e.g., distributed clusters, multicore SMP, GPUs, TPUs, and FPGAs.

Ability to design and implement feedback loops for continual model updates and improvements.

Experience receiving, converting, and cleansing big data.

Experience with visualization or BI tools, such as Tableau, MicroStrategy, RapidMiner, or Microsoft Power BI.

Program, product, or project management experience delivering model and analytics results.

Strong background in Linux/Unix/CentOS/macOS and Windows installation and administration.

Keen attention to detail with the ability to effectively prioritize and execute multiple tasks.

Ability to read, write, speak and understand English.

PREFERRED QUALIFICATIONS
Experience with Jupyter notebooks, NVIDIA GPU-based deep learning, and NVIDIA RAPIDS.
Familiarity with APIs: JavaScript API, REST API, or Data Extract APIs.
Familiarity with data workflow/data prep platforms, such as Alteryx, Pentaho, or KNIME.
Familiarity with DevOps/CI/CD/automation/configuration/orchestration management using Puppet, Chef, Ansible, Jenkins, Kubernetes, Airflow, or equivalents.
Knowledge of best practices and IT operations in an always-up, always-available service.
Knowledge of Agile, Scrum, and SAFe environments. Experience delivering within these environments is a plus.

Education
Bachelor's degree in data science, an engineering discipline, computer science, statistics, applied math, or a related field required.
Master's degree in data science, an engineering discipline, computer science, statistics, applied math, or a related field preferred.

Related Work Experience
3-5+ years of experience in one or more of the following areas: machine learning, deep learning, recommendation systems, natural language processing, fraud detection, or artificial intelligence.
3-5+ years of experience with lifecycle management, from idea to deployment, in one or more of the following: artificial intelligence, deep learning models, machine learning models, recommendation systems, natural language processing, or fraud detection.
3-5+ years of experience in a data engineering and/or software development position.
Experience delivering multiple systems where the candidate was responsible for designing the architecture and for implementing, operating, supporting, and managing the release lifecycle from inception to end of life.
7-10+ years of hands-on working experience with Python, RDBMS, SQL, scripting, and coding.
Proven experience in translating insights into business recommendations.

WORKING CONDITIONS
Charter Technical Engineering Center

Highly collaborative and innovative workspace

Occasional Travel