Data Engineer

Posted by Koda Staff
Job Description

Job Title: Freelance Data Engineer

Location: Brussels, Belgium (Remote flexibility available)

Job Type: Freelance / Contract

Duration: 6+ months (with possibility of extension)


About the Role:

We are looking for a skilled and driven Freelance Data Engineer to join our client's team and help build, optimize, and maintain data pipelines for their data-driven solutions. The ideal candidate will have extensive experience in designing and developing data architectures, ensuring data quality, and working with large datasets to provide insights that drive decision-making across the business.

You will be working closely with data scientists, analysts, and other engineering teams to ensure the efficient handling, storage, and retrieval of data. This role will be based in Brussels, with some flexibility for remote work.


Key Responsibilities:

  • Design, implement, and optimize scalable data pipelines for ingesting and transforming data from various sources.
  • Develop and maintain data infrastructure to ensure smooth and efficient data processing workflows.
  • Work with cloud platforms (AWS, Azure, GCP) and on-premises solutions to handle large-scale data storage and processing.
  • Collaborate with data scientists and business analysts to ensure that data is accessible, reliable, and processed in a timely manner.
  • Implement data models and ensure data integrity and quality across all systems.
  • Monitor and troubleshoot data pipelines, ensuring minimal downtime and quick resolution of issues.
  • Continuously improve the data architecture by staying up to date with the latest tools, technologies, and best practices in the industry.
  • Assist with the deployment and scaling of data-driven applications and solutions.


Key Requirements:

  • Proven experience (3+ years) as a Data Engineer or in a similar data-focused role.
  • Strong proficiency in data engineering technologies such as SQL, Python, and tools like Apache Kafka, Spark, or similar frameworks.
  • Solid understanding of cloud data platforms (AWS, Azure, Google Cloud) and distributed computing principles.
  • Experience with ETL processes and building efficient data pipelines.
  • Familiarity with data warehousing concepts and technologies (Redshift, Snowflake, BigQuery, etc.).
  • Knowledge of containerization technologies (Docker, Kubernetes) is a plus.
  • Experience with version control systems such as Git.
  • Strong problem-solving skills, attention to detail, and the ability to work independently.
  • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.


Nice to Have:

  • Experience with machine learning models and their integration into data pipelines.
  • Familiarity with data visualization tools like Power BI, Tableau, or Looker.
  • Understanding of data privacy regulations (GDPR).


What We Offer:

  • Competitive freelance rate based on experience.
  • Flexibility to work remotely with occasional office presence in Brussels.
  • Opportunity to work with a dynamic team and cutting-edge technologies.