Data Engineer | Digital Asset Investment Firm
We are a leading digital asset investment firm operating to institutional-grade standards across business areas including Alpha Strategies, Trading, and Asset Management. Founded by experienced professionals with deep expertise in finance, blockchain, and technology, we share a common passion for digital assets. Our core values are rooted in openness, connectivity, collaboration, and strong partnerships across the digital asset ecosystem. People and technology are central to our mission.
We are looking for an experienced Data Engineer to join our team. In this role, you'll play a critical part in developing and managing our data infrastructure. This position requires strong technical expertise and the ability to work closely with quantitative researchers to process, analyze, and manage large datasets that fuel our quantitative investment strategies.
Key Responsibilities:
Collaborate with quantitative researchers to understand their data needs
Enhance the performance and usability of our petabyte-scale data lake, built with Python
Create high-performance, real-time event-driven datasets using both live and historical market data (see the sketch after this list)
Oversee the development, testing, and deployment of machine learning-based trading models
Integrate external APIs and third-party data sources for structured and unstructured data ingestion
Build, refine, and optimize data pipelines for efficient ETL processes that support research, analysis, forecasting, and execution
Implement automated measures to ensure the integrity, accuracy, and consistency of data inputs and outputs
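To make the event-driven dataset work above concrete, here is a minimal sketch of a consumer that batches live trade events from Kafka into a columnar Parquet file for research use. The broker address, topic name, and JSON message fields are illustrative assumptions, not a description of our stack; a production pipeline would more likely carry Protobuf messages validated against a schema registry.

```python
# Minimal sketch of an event-driven trade ingester (illustrative only:
# broker address, topic name, and message fields are assumptions).
import json

import pyarrow as pa
import pyarrow.parquet as pq
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed local broker
    "group.id": "trade-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["trades"])  # hypothetical topic name

rows = {"ts": [], "symbol": [], "price": [], "size": []}
try:
    while len(rows["ts"]) < 10_000:  # flush in fixed-size batches
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())  # hypothetical JSON payload
        rows["ts"].append(event["ts"])
        rows["symbol"].append(event["symbol"])
        rows["price"].append(float(event["price"]))
        rows["size"].append(float(event["size"]))
finally:
    consumer.close()

# Persist the batch as a columnar file for downstream research use.
pq.write_table(pa.table(rows), "trades_batch.parquet")
```

Batch size, flush policy, and schema would be tuned to the feed; the point is the shape of the loop: poll, decode, accumulate, persist.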
Skills and Qualifications:
At least 3 years of experience in a similar role, preferably within a quantitative hedge fund or proprietary trading firm
Solid understanding of and hands-on experience with L2 and L3 market data (an illustrative L2 sketch follows this list)
Bachelor's degree in Computer Science or a related discipline
Strong programming skills in Python and Rust, with experience using Linux and Docker
Familiarity with open-source data tools such as Apache Arrow, and with distributed computing frameworks such as Ray or Dask
Proven experience in designing and optimizing data pipelines, data modeling, and ETL processes
Expertise in building event-driven applications using tools like Protobuf, Kafka, and Schema Registry
Familiarity with AWS and its data services
Strong collaboration skills, with the ability to work closely with quantitative researchers and translate their needs into technical solutions
Excellent analytical and problem-solving abilities to process large datasets and identify actionable insights
Highly detail-oriented and methodical in your work
Able to thrive in a fast-moving, dynamic work environment and adapt to new technologies
Excellent written and verbal communication skills
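For context on the L2 market data requirement above: an L2 feed publishes aggregate depth per price level, whereas L3 carries individual order events. Below is a minimal sketch of applying L2 depth deltas to an in-memory book; the (side, price, size) update shape is a hypothetical simplification, since each venue defines its own wire format.

```python
# Minimal sketch of maintaining an L2 order book from depth updates.
# The (side, price, size) format is hypothetical; real venue feeds
# differ, but the apply-or-delete logic is the common pattern.
from sortedcontainers import SortedDict  # third-party sorted mapping

bids = SortedDict()  # price -> aggregate size at that level
asks = SortedDict()

def apply_update(side: str, price: float, size: float) -> None:
    """Apply one L2 delta: size 0 removes the level, otherwise replaces it."""
    book = bids if side == "bid" else asks
    if size == 0:
        book.pop(price, None)
    else:
        book[price] = size

def best_bid_ask() -> tuple[float, float]:
    """Top of book: highest bid price, lowest ask price."""
    return bids.peekitem(-1)[0], asks.peekitem(0)[0]

# Example: seed the book and read the touch.
apply_update("bid", 64999.5, 2.0)
apply_update("ask", 65000.0, 1.5)
apply_update("bid", 64999.0, 3.0)
print(best_bid_ask())  # (64999.5, 65000.0)
```

The same apply-or-delete pattern generalises: an L3 book would key on order IDs and aggregate them into price levels.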
FAQs
What happens after I apply?
Thank you for applying; we understand that taking that step takes time. When you apply, your details go directly to the consultant sourcing for the role. Due to demand, we may not be able to get back to every applicant. However, we keep your CV and details on file, so when we see similar roles or skill sets that drive growth in organisations, we will reach out to discuss opportunities.
Should I apply even if I don't meet every requirement?
Yes. Even if this role isn't a perfect match, applying allows us to understand your expertise and ambitions, ensuring you're on our radar when the right opportunity arises.
Do you advertise all of your roles?
We work in several ways. We advertise available roles on our site, though due to confidentiality we may not post all of them. We also work with clients who are focused on skills and on understanding what is required to future-proof their business. That's why we recommend registering your CV, so you can be considered for roles that have yet to be created.
Do you offer support with the application process?
Yes, we help with CV and interview preparation. From customised support on optimising your CV to interview preparation and compensation negotiation, we advocate for you throughout your next career move.