Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make an impact globally while working with a team located across 5 continents. Razer is also a great place to work, providing you the unique, gamer-centric #LifeAtRazer experience that will accelerate your growth, both personally and professionally.

Job Responsibilities:
This role designs and builds complex big data processing systems that enable the organization to collect, manage, and convert raw data into usable information for data scientists and business analysts. He/she is responsible for developing and implementing data pipelines, data storage systems, and data integration processes.

Essential Duties and Responsibilities
- Design and implement data systems that enable the organization to store, process, and analyze large volumes of data. This involves developing data pipelines, designing data storage systems, and ensuring that data is integrated effectively across the organization.
- Manage data lakes and data warehouses by populating and operationalizing them. This involves creating and managing table schemas, views, and materialized views so as to scale their operational use for all stakeholders.
- Develop and template data pipelines that perform extract, transform, and load (ETL) operations from various sources. This involves using cloud computing tools to build streaming and batch processing pipelines that handle large volumes of data, depending on the business requirements.
- Build analytical tools to ensure data quality while managing metadata. This involves data validation checks, lineage tracking, and data cataloging.
- Collaborate with data scientists and analysts to understand the data needs of stakeholders across the organization and develop appropriate solutions that meet those needs.
- Develop and implement jobs and cloud infrastructure in line with the company's security policies and cost optimization practices. This involves implementing data security best practices, developing data encryption and access control policies, and monitoring for security breaches and cost implications.

Requirements
- Ability to work in a fast-paced, high-pressure, agile environment.
- Ability and willingness to learn new technologies and apply them at work in order to stay ahead.
- Strong in programming languages such as Python and SQL.
- Experience developing and deploying applications on cloud infrastructure such as AWS, Azure, or Google Cloud Platform, using infrastructure-as-code tools such as Terraform, containerization tools like Docker, and container orchestration platforms like Kubernetes.
- Experience using orchestration tools like Airflow or Prefect, distributed computing frameworks like Spark or Dask, and data transformation tools like dbt (Data Build Tool).
- Excellent command of data processing techniques (both streaming and batch) and of managing and optimizing data storage (data lake, lakehouse, and database; SQL and NoSQL) is essential.
- Experience in network infrastructure.
- Excellent problem-solving and analytical skills.
- Excellent written and verbal communication skills for coordinating across teams.
- Experience and interest in keeping up with advancements in the data engineering field.
- Bachelor's or Master's degree in computer science or a similar discipline.
- 2+ years of experience in software engineering and data engineering.

Pre-Requisites:
Are you game?