Requirements: English
Company: Shippeo
Region: Paris, Île-de-France
Founded in 2014, Shippeo is a leading European SaaS company specializing in supply chain visibility. Over the past two years, we have experienced rapid growth, securing $140 million in funding, including $30 million raised in January 2025.
At Shippeo, we take pride in our exceptional diversity, with a team representing 27 nationalities and speaking 29 languages. With offices spanning Europe, North America, and most recently Asia, we deliver global, multimodal, real-time transportation visibility to customers across industries such as retail, manufacturing, automotive, and CPG.
Our vision is to become the world's leading data platform for the freight industry. By leveraging our expanding network, real-time data, and AI, we empower supply chains to deliver superior customer experiences while achieving operational excellence.
What Shippeo Does
Our product is a mission-critical SaaS web platform with high-traffic inbound and outbound integrations. Our mission is to anticipate problems and proactively alert end customers so they can manage exceptions efficiently. We achieve this by collecting and matching millions of theoretical and real-world data points from different stakeholders (a toy sketch of this matching follows the list below).
Shippeo gives visibility to shippers, carriers, and customers by answering the following questions:
- where is the truck, and are there any foreseeable delays?
- what has actually been loaded and/or delivered, and are there any discrepancies?
- what are the levers for improvement for the transport operations?
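To give a flavor of that matching, here is a minimal Python sketch that compares planned stop times against real-time arrival estimates and raises alerts. The `Stop` type, field names, and 30-minute tolerance are invented for illustration and are not Shippeo's actual model:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Toy model of matching "theoretical" (planned) data against "real"
# (telematics-derived) data to anticipate delays. All names and the
# 30-minute tolerance are invented for this sketch.

@dataclass
class Stop:
    site: str
    planned_arrival: datetime    # theoretical data from the transport plan
    predicted_arrival: datetime  # real-time estimate derived from GPS/telematics

def flag_delays(stops: list[Stop],
                tolerance: timedelta = timedelta(minutes=30)) -> list[str]:
    """Return an alert for each stop whose predicted arrival exceeds the plan."""
    alerts = []
    for stop in stops:
        delay = stop.predicted_arrival - stop.planned_arrival
        if delay > tolerance:
            alerts.append(f"{stop.site}: expected {delay} late")
    return alerts

stops = [
    Stop("Paris DC", datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 1, 10, 15)),
    Stop("Lyon hub", datetime(2025, 3, 1, 14, 0), datetime(2025, 3, 1, 14, 10)),
]
print(flag_delays(stops))  # ['Paris DC: expected 1:15:00 late']
```

In production, the predicted arrival would itself come from a learned ETA model fed by live telematics, not a single raw feed.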
Are you a visionary Senior Data Scientist ready to revolutionize global supply chains with the power of AI? We're not just looking for someone to build models; we're seeking a probabilistic powerhouse and machine learning maestro who can unlock unprecedented efficiencies and strategic advantages. Imagine leading the charge in designing, building, and deploying intelligent systems that directly impact the future of supply chains.
What You'll Do: Drive Innovation at Scale
- Architect & Innovate: Go beyond traditional modeling. You'll be at the forefront of designing and implementing cutting-edge predictive and prescriptive models that tackle complex supply chain challenges. Think advanced regression, time-series forecasting (see the sketch after this list), and novel machine learning techniques that deliver measurable business impact.
- Research & Prototype: Be an intrapreneur. We empower you to conduct independent research, rapidly prototype groundbreaking ideas, and transform them into robust, production-ready solutions. Your intellectual curiosity will be a key driver of our success.
- Build for Impact: Own the entire machine learning lifecycle. From meticulously crafting scalable Python code to leveraging PySpark and distributed systems for massive datasets, you'll ensure our models are not just accurate but also performant and maintainable.
- Seamless Deployment: Your solutions won't stay in notebooks. You'll champion CI/CD, Docker, and Kubernetes to deploy and manage high-availability microservices, ensuring our AI-driven insights are continuously delivered to the business.
- Strategic Collaboration: Work hand-in-hand with product leaders, translating complex technical solutions into clear business value. Your insights will directly inform critical strategic decisions and drive tangible outcomes.
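As a taste of the time-series forecasting mentioned above, the sketch below fits a gradient-boosted regressor on lag features of a synthetic transit-time series. The data, lag count, and scoring choices are assumptions made for the example, not a description of Shippeo's production models:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

# Synthetic daily transit times (hours) with weekly seasonality and noise;
# invented purely for this sketch.
rng = np.random.default_rng(42)
transit_hours = 24 + 3 * np.sin(np.arange(400) / 7) + rng.normal(0, 1, 400)

def lag_features(series: np.ndarray, n_lags: int = 7):
    """Build a supervised dataset: predict each value from its previous n_lags."""
    X = np.column_stack([series[i : len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

X, y = lag_features(transit_hours)
model = GradientBoostingRegressor()
# TimeSeriesSplit keeps folds chronological, so no future observation
# leaks into the training data of any fold.
scores = cross_val_score(model, X, y, cv=TimeSeriesSplit(n_splits=5),
                         scoring="neg_mean_absolute_error")
print(f"MAE: {-scores.mean():.2f} hours")
```

The `TimeSeriesSplit` cross-validator matters here: shuffled folds would leak future observations into training and overstate accuracy.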
What You'll Bring: Your Toolkit for Transformation
- Mastery of the Craft: An MSc (or higher) in a quantitative field (e.g., Applied Mathematics, Computer Science, Engineering). You're not just familiar with probability and statistics; you live and breathe them.
- Pythonic Prowess: Expert-level Python skills are non-negotiable, coupled with deep experience in libraries like scikit-learn, Pandas, and NumPy. You write clean, modular, and testable code following best software engineering practices (OOP, abstractions).
- Data Whisperer: You're fluent in SQL and adept at wrangling complex, large-scale datasets, transforming raw data into actionable insights.
- Distributed Systems Dynamo: Proven experience with Apache Spark (PySpark a plus) and asynchronous computing is essential for handling our expansive data landscape; a short PySpark sketch follows this list.
- Production Pioneer: You have a strong track record of successfully building, deploying, and maintaining machine learning models in production environments.
- DevOps Savvy: Comfortable with Git, Linux, Docker, and ideally Kubernetes for robust deployment and orchestration. Experience with GCP is a huge bonus.
- Impact-Driven Communicator: You explain complex analyses clearly to technical and non-technical audiences alike, and you tie your work to measurable business outcomes.
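To make the Spark expectation concrete, here is a minimal PySpark sketch of the kind of large-scale aggregation involved. The paths, table, and column names are hypothetical, not Shippeo's actual schema:

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical aggregation over millions of tracking events: average dwell
# time per site. All identifiers and paths are invented for this sketch.

spark = SparkSession.builder.appName("dwell-time-aggregation").getOrCreate()

events = spark.read.parquet("s3://bucket/tracking_events/")  # illustrative path

dwell = (
    events
    # Casting timestamps to long yields epoch seconds; the difference is dwell.
    .withColumn(
        "dwell_minutes",
        (F.col("departure_ts").cast("long") - F.col("arrival_ts").cast("long")) / 60,
    )
    .groupBy("site_id")
    .agg(
        F.avg("dwell_minutes").alias("avg_dwell_minutes"),
        F.count("*").alias("n_events"),
    )
)

dwell.write.mode("overwrite").parquet("s3://bucket/aggregates/dwell/")
```

Because the aggregation is expressed on DataFrames rather than driver-side loops, Spark can distribute it across the cluster and scale with the data volume.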