Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make an impact globally while working with a team located across five continents. Razer is also a great place to work, providing the unique, gamer-centric #LifeAtRazer experience that will accelerate your growth, both personally and professionally.

Job Responsibilities:

This role designs and builds complex big data processing systems that enable the organization to collect, manage, and convert raw data into usable information for data scientists and business analysts, and that enable the use of Large Language Models (LLMs). He/she is responsible for developing and implementing data pipelines, data storage systems, and data integration processes, while working closely with data scientists and machine learning engineers to utilize LLMs.

Essential Duties and Responsibilities:

- Design and implement data systems that enable the organization to store, process, and analyze large volumes of data. This involves developing data pipelines, designing data storage systems, and ensuring that data is integrated effectively across the organization, including support for LLMs.
- Manage data lakes and data warehouses by populating and operationalizing them. This involves creating and managing table schemas, views, and materialized views, including tokenization techniques for fine-tuning LLMs.
- Develop and template data pipelines that perform extract, transform, and load (ETL) operations from various sources. This involves using cloud computing tools to build streaming and batch processing pipelines that handle large volumes of data, which can also be used for training and fine-tuning LLMs.
- Build analytical tools to ensure data quality while managing metadata. This involves implementing data validation checks, lineage tracking, and data cataloging, and building a knowledge base.
- Collaborate with data scientists and machine learning engineers to understand the data needs of stakeholders across the organization and integrate appropriate solutions, including LLMs, that meet those needs.
- Develop and implement jobs and cloud infrastructure in line with the company's security policies and practices, as well as cost optimization practices. This involves implementing data security best practices, developing data encryption and access control policies, and monitoring for security breaches and cost implications.

Requirements:

- Ability to work in a fast-paced, high-pressure, agile environment.
- Ability and willingness to learn new technologies and apply them at work in order to stay ahead.
- Strong in programming languages such as Python and SQL.
- Experience developing and deploying applications on cloud infrastructure such as AWS, Azure, or Google Cloud Platform, using infrastructure-as-code tools such as Terraform, containerization tools like Docker, and container orchestration platforms like Kubernetes.
- Experience using orchestration tools like Airflow or Prefect, distributed computing frameworks like Spark or Dask, and data transformation tools like dbt (Data Build Tool).
- Excellent command of data processing techniques (both streaming and batch) and of managing and optimizing data storage (data lake, lakehouse, and database; SQL and NoSQL) is essential.
- Experience in network infrastructure.
- Excellent problem-solving and analytical skills, with an understanding of LLM technologies and their applications.
- Excellent written and verbal communication skills for coordinating across teams.
- Experience with and interest in keeping up with advancements in the data engineering field.
- Bachelor's or Master's degree in computer science or a similar discipline.
- 2+ years of experience in software engineering and data engineering.

Are you game?