At ThreatMark, we build trust and safety in the digital world through state-of-the-art behavioral profiling solutions supported by AI-based security platforms. Our products deliver payment protection, verify user identity, and detect cyber threats for businesses around the world. We detect when a customer’s behavior seems out of character and block fraud in real time – helping organizations outsmart criminals.
We’ve had consecutive years of outstanding growth and commercial success, and our team is enthusiastic, dynamic, and ambitious. Our mission is to make ThreatMark’s technology the No.1 risk management system globally.
We have recently secured a new round of funding and are in a phase of rapid growth, investing heavily in the product – improving the architecture, adding multi-tenancy, increasing automation, and more.
We are looking for a Data Engineer – a DevOps wizard who will help us build data pipelines and adopt new backbone technologies to support our platform.
Some example challenges:
- Set up and tune Apache Kafka
  - Acting as a message broker for real-time asynchronous message passing between microservices (<10 ms latency required)
  - Streaming platform (analysis of data in time windows) – tools like ksqlDB
  - High throughput (tens of thousands of devices concurrently sending sensor data)
  - Running both on-prem (k8s) and in a multi-tenant SaaS cloud (k8s)
  - Automatically scalable and elastic (a k8s operator?)
  - Highly available
- Set up and tune a data lake
  - Ingesting data from all clients and multiple internal systems
  - Hosted in AWS
  - Accessible by researchers and the product team (data analysts)
- Create ML pipelines for automated model training (millions of data points)
- Handle requirements on encryption, consistency, and data isolation
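To make the time-window idea concrete, here is a minimal sketch of a tumbling-window aggregation in plain Python (not ksqlDB – the event fields and window size are illustrative assumptions, not our actual schema):

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds=60):
    """Count events per device in fixed, non-overlapping time windows.

    `events` is an iterable of (timestamp_seconds, device_id) pairs.
    Returns {window_start: Counter({device_id: count})}.
    """
    windows = {}
    for ts, device_id in events:
        # Align the timestamp down to the start of its window.
        window_start = int(ts) - int(ts) % window_seconds
        windows.setdefault(window_start, Counter())[device_id] += 1
    return windows

# Three events: two fall in the [0, 60) window, one in [60, 120).
events = [(10, "dev-a"), (59, "dev-a"), (61, "dev-b")]
result = tumbling_window_counts(events, window_seconds=60)
# result == {0: Counter({"dev-a": 2}), 60: Counter({"dev-b": 1})}
```

In production, a streaming engine does this continuously over unbounded input and handles late or out-of-order events – that is where tools like ksqlDB earn their keep.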
Do you get excited thinking about how to solve problems like these?
- Can you describe how ZooKeeper leader election works in the middle of the night? You are our guy!
- Can you calculate which hash slot a key will be assigned to in Redis Cluster? You are our guy!
- Do you know that Docker is just a bunch of glue on top of kernel namespaces that have been around for some 20 years? You are our guy!
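For the curious, the Redis question has a concrete answer: Redis Cluster assigns each key to one of 16384 hash slots using the XMODEM variant of CRC16, honoring `{...}` hash tags. A minimal sketch (the function names are ours, not Redis’s):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM: polynomial 0x1021, initial value 0, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Return the Redis Cluster hash slot (0-16383) for a key."""
    k = key.encode()
    start = k.find(b"{")
    if start != -1:
        end = k.find(b"}", start + 1)
        # A non-empty {...} hash tag means only its content is hashed,
        # which lets related keys be forced onto the same slot.
        if end > start + 1:
            k = k[start + 1:end]
    return crc16_xmodem(k) % 16384

# "123456789" -> 0x31C3 is the standard CRC16/XMODEM check value.
assert crc16_xmodem(b"123456789") == 0x31C3
# Matches the slot redis-cli reports for "foo":
assert key_slot("foo") == 12182
```

Keys sharing a hash tag, such as `{user1000}.following` and `{user1000}.followers`, land on the same slot – handy for multi-key operations in a cluster.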
If you prefer to wear a suit rather than a dirty t-shirt, here is a more structured description.
What you’ll do
As a Data Engineer, you will code the infrastructure problems away by
- Designing/Implementing scalable, low-latency, high-throughput, fault-tolerant, extensible, and easily maintainable data processing pipelines for real-time systems
- Designing the deployment processes and pipelines so we can upgrade without downtime
- Building prototypes and proofs of concept
- Running performance tests
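When we say performance tests, we mean measuring tail latency, not just averages – a p99 under 10 ms matters far more than a fast mean. A tiny sketch (the workload is a stand-in; a real test would exercise the actual pipeline):

```python
import time

def measure_latencies(operation, iterations=1000):
    """Run `operation` repeatedly; return per-call latencies in milliseconds."""
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        operation()
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

def percentile(latencies, p):
    """Nearest-rank percentile, e.g. p=99 for p99."""
    ranked = sorted(latencies)
    index = min(len(ranked) - 1, int(len(ranked) * p / 100))
    return ranked[index]

# Stand-in workload; imagine a produce/consume round-trip here instead.
lat = measure_latencies(lambda: sum(range(1000)), iterations=200)
print(f"p50={percentile(lat, 50):.3f} ms  p99={percentile(lat, 99):.3f} ms")
```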
We will also expect you to share your experience with the rest of the team and mentor them on DevOps.
What skills you’ll need
We expect you to have these skills or to acquire them rapidly:
- DevOps mindset – e.g., “cattle, not pets,” Infrastructure as Code, immutable infrastructure, chaos engineering, etc.
- Cloud engineering (AWS)
- Software architecture – architecture types, characteristics, understanding fallacies of distributed computing, etc.
- Programming (anything Turing complete, but mostly Python)
- Computer science skills (algorithms and structures, complexity, information theory, etc.) at the university level (we care about the skills, not the degree)
- Strong debugging, testing, tuning, and problem-solving skills
- Knowledge of the IT world – Linux, operating systems, network protocols, etc.
What would make you a strong fit
- 5+ years of professional software development experience
- Demonstrable track record of exceptional software engineering skills on past projects
- Experience designing and building multi-tenant, cloud-hosted systems
- Experience building highly available, low-latency systems in any programming language
- Experience working with large datasets and data processing technologies (stream processing)
- Strong communication & collaboration skills
- Self-starter who learns new things quickly
Our team culture
- Value is what we care about – taking a few days of vacation and coming up with a creative solution might be more valuable than coding 20 hours a day
- Mistakes are good as long as you openly share them, learn from them, and don’t repeat them too often
- The team’s output is more than any individual contribution and more than the sum of its parts
- Deep understanding and continuous learning
- Clear and transparent communication
- No excuses
Our stack
- Python 3.9 (AsyncIO)
- Docker, k8s, AWS
- Terraform, Ansible, Gitlab CI
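Since the stack centers on Python AsyncIO, here is a flavor of the style – a concurrent fan-out over a batch of events (the coroutine names and fields are illustrative, not our actual code):

```python
import asyncio

async def score_event(event: dict) -> dict:
    # Placeholder for real async work, such as a model call or a Kafka produce.
    await asyncio.sleep(0)
    return {**event, "risk_score": 0.1}

async def main() -> list:
    events = [{"id": i} for i in range(3)]
    # Fan out the scoring concurrently; gather preserves input order.
    return await asyncio.gather(*(score_event(e) for e in events))

results = asyncio.run(main())
# results is a list of three scored events, in input order.
```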
Benefits and Perks
- Opportunity to solve complex problems and see results fast and on a large scale (tens of millions of users now and hundreds of millions in the near future)
- A salary that truly reflects your skills and contribution to the company’s success
- All the tech and tools you need to succeed are available
- Flexible cooperation agreements (contractor/OSVČ, full-time employment, etc.)
- Subsidized sports activities
- 5 weeks of vacation
- Friendly work environment, equally open to anyone
- Flexible time off
If you understand Czech, we recommend listening to this SCRIPTease podcast, where our CTO Kryštof Hilar discusses the technology and stack behind ThreatMark: