Data Engineer
Posted 5 days 8 hours ago by N Consulting Limited
Role: Data Engineer
Work Mode: Hybrid
Contract Role
Location: London or Edinburgh
Experience: 10-12 years
Job Description:
Design, develop, and maintain ETL processes to move data from source systems into Snowflake, Teradata, and SQL Server environments.
Build scalable data pipelines for processing and storing large datasets.
Collaborate with analysts and business stakeholders to define and implement data strategies.
Optimize database performance, including query optimization, indexing, and partitioning across all enterprise data platforms.
Work with cross-functional teams to gather requirements and develop effective data solutions.
Ensure data quality, consistency, and integrity throughout the data lifecycle.
Maintain and improve existing data architectures and workflows to meet business requirements.
Create and maintain documentation for data systems, pipelines, and processes.
Monitor and troubleshoot data pipeline performance issues and resolve them promptly.
Assist in the migration and integration of data from legacy systems into Snowflake and other strategic data stores.
Perform data transformation tasks using SQL, Snowflake SQL, and relevant data processing tools.
Required Skills and Qualifications:
Proven experience working with SQL Server (e.g., T-SQL, Stored Procedures, Indexing, Query Optimization, System Catalog Views).
Strong experience in Snowflake architecture, including data loading, transformation, and performance tuning.
Proficient in ETL development using tools such as Informatica PowerCenter and BDM, with job scheduling via AutoSys, Airflow, and SQL Server Agent.
Experience with cloud platforms, preferably AWS.
Strong knowledge of AWS cloud services, including EMR, RDS for PostgreSQL, Redshift, Athena, S3, and IAM.
Solid understanding of data warehousing principles and best practices.
Strong proficiency in SQL for data manipulation, reporting, and optimization.
Knowledge of data modeling and schema design.
Experience working with large, complex datasets and implementing scalable data pipelines.
Familiarity with version control tools such as GitLab.
Experience with data integration, data governance, and security best practices.