
Urgent! Real-Time Data Streaming Engineer (Kafka, Flink & Spark Streaming) Job Opening in Prague – Now Hiring (Confidential)

Job description

We are seeking a highly skilled Real-Time Data Streaming Engineer with strong expertise in designing and building low-latency, event-driven data pipelines.

You will be responsible for architecting, implementing, and optimizing streaming platforms using Apache Kafka, Apache Flink, and Spark Streaming.

The ideal candidate has hands-on experience with large-scale, high-throughput systems, ensuring data reliability, scalability, and performance for enterprise-grade solutions.

Details:
Location: Remote in EU
Employment Type: Full-Time, B2B Contract
Start Date: ASAP
Language Requirements: Fluent English

Key Responsibilities
  • Design, develop, and maintain real-time data streaming pipelines using Kafka, Flink, and Spark Streaming.
  • Architect event-driven and microservices-based solutions for real-time analytics and processing.
  • Implement data ingestion, transformation, and enrichment workflows across distributed systems.
  • Optimize performance and scalability of streaming jobs for high-throughput, low-latency environments.
  • Ensure data quality, governance, and fault tolerance within the streaming infrastructure.
  • Integrate streaming solutions with data warehouses, data lakes, and cloud platforms (AWS, Azure, GCP).
  • Collaborate with data engineers, data scientists, and application teams to deliver business-critical real-time insights.
  • Monitor, troubleshoot, and improve the reliability and resilience of streaming systems.
  • Participate in system design, code reviews, and best practices development.
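The ingestion, transformation, and enrichment work described above largely comes down to per-record operators running inside a consumer loop or a Flink/Spark map stage. As a minimal illustrative sketch (not the employer's actual stack), the example below shows a stateless enrich step in Python; the event schema and the `COUNTRY_BY_USER` lookup table are hypothetical stand-ins for reference data that would normally come from a compacted Kafka topic or a cache:

```python
import json

# Hypothetical reference data for the enrichment step; in a real pipeline
# this might be loaded from a compacted Kafka topic, a cache, or a database.
COUNTRY_BY_USER = {"u1": "CZ", "u2": "DE"}

def enrich_event(raw: bytes) -> dict:
    """Parse a raw JSON event and attach reference data.

    This is the kind of per-record transform that would run inside a
    Kafka consumer loop or as a map operator in Flink / Spark Streaming.
    """
    event = json.loads(raw)
    event["country"] = COUNTRY_BY_USER.get(event["user_id"], "unknown")
    return event

if __name__ == "__main__":
    print(enrich_event(b'{"user_id": "u1", "amount": 42}'))
    # → {'user_id': 'u1', 'amount': 42, 'country': 'CZ'}
```

Keeping such transforms pure (no I/O inside the function) makes them easy to unit-test and to reuse across Kafka Streams, Flink, or Spark jobs.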

Requirements
  • 5+ years of experience in data engineering, including at least 3 years focused on real-time streaming.
  • Strong expertise in Apache Kafka (producers, consumers, Connect, Streams, schema registry).
  • Hands-on experience with Apache Flink or Spark Streaming for real-time data processing.
  • Solid understanding of event-driven architectures, pub/sub systems, and distributed computing.
  • Strong programming skills in Java, Scala, or Python.
  • Proficiency in SQL and experience with databases (relational and NoSQL: PostgreSQL, Cassandra, MongoDB, etc.).
  • Familiarity with cloud-native streaming solutions (AWS Kinesis, Azure Event Hubs, GCP Pub/Sub).
  • Knowledge of CI/CD, containerization (Docker, Kubernetes), and monitoring tools (Prometheus, Grafana, ELK).
  • Strong problem-solving and debugging skills, with experience in large-scale production environments.
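To make the Flink/Spark Streaming requirement concrete, the sketch below mimics the semantics of a tumbling event-time window in plain Python. This is a simplified illustration only, assuming `(timestamp_ms, key)` input tuples; real Flink or Spark Structured Streaming jobs add watermarks, state backends, and fault tolerance on top of this core idea:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms=1000):
    """Count events per key within fixed-size (tumbling) time windows.

    Each event is a (timestamp_ms, key) tuple. Every event falls into
    exactly one window [window_start, window_start + window_ms), which is
    the grouping a tumbling event-time window performs in Flink or
    Spark Structured Streaming (minus watermarking and late-data handling).
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_ms) * window_ms
        counts[(window_start, key)] += 1
    return dict(counts)

if __name__ == "__main__":
    print(tumbling_window_counts([(100, "a"), (900, "a"), (1100, "a")]))
    # → {(0, 'a'): 2, (1000, 'a'): 1}
```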

Nice to Have
  • Knowledge of data lakehouse architectures (Delta Lake, Iceberg, Hudi).
  • Experience with machine learning pipelines on streaming data.
  • Familiarity with message brokers (RabbitMQ, Pulsar, ActiveMQ).
  • Background in industries like fintech, telecom, IoT, or e-commerce where real-time data is critical.
  • Contributions to open-source streaming projects.


Required Skill Profession

Computer Occupations


