How Our AI System Fights Fraud in International Shipping

In the world of logistics, fraudulent and dangerous packages are among the industry's biggest challenges. That's why a major multinational logistics company turned to BigHub for help in implementing an early-detection system. With the goal of deploying a solution for real-time evaluation of shipments as they enter the transportation network, our team at BigHub faced several challenges, such as scaling the REST API and managing the ML lifecycle.

BigHub has a longstanding partnership with a major international logistics firm, during which it has successfully delivered a diverse range of data projects spanning data engineering, real-time data processing, and cloud and machine learning-based applications. All of these were designed and developed to enhance the logistics company's operations, including warehouse management, supply chain optimization, and the daily transportation of thousands of packages worldwide.


In 2022, BigHub was presented with a new challenge: to aid in the implementation of a system for the early detection of suspicious, potentially fraudulent shipments entering the company's logistics network. Based on the client's pilot solution, which had been developed and tested on historical data, BigHub improved the algorithms and deployed them in a production environment for real-time evaluation of shipments as they entered the transportation network. The initial pilot was based on batch evaluation, but our team was required to create a REST API that could handle individual queries with a response time of less than 200 milliseconds. This API would be connected to the client's network, where further operations would be carried out on the data.

High-level Architecture

The accompanying diagram illustrates the application's high-level architecture. The core of the system is the REST API, which is connected to the client's network to receive and process queries. These queries are validated and evaluated, and the results are returned to the end user. The data layer serves as the foundation for the calculations, as well as for model training and the pre-processing of feature tables. The evaluation results are also stored in the data layer to facilitate summary analyses in the reporting layer. The MLOps layer manages the lifecycle of the machine learning model, including training, validation, storage of metrics for each model version, and making the current model version accessible via the REST API. To achieve this, the solution leverages a variety of modern data technologies, including Kubernetes, MLflow, Airflow, Teradata, Redis, and Tableau.


During the development of the system, our team needed to address several challenges, including:

  • Setting up and scaling the REST API to handle a high volume of queries in real time (260 queries per second from 30 parallel sources), ensuring it is ready for global deployment.
  • Optimizing the evaluation speed of individual queries through low-level programming techniques, reducing the response time from hundreds of milliseconds to tens of milliseconds.
  • Managing the machine learning model lifecycle, including automated retraining, deployment of new versions to the API, quality monitoring, and notifications, to ensure reliable long-term performance.
  • Implementing modifications on the fly: our agile approach provided the flexibility to make quick, successful changes to the ongoing project, to the satisfaction of both parties and with better results.
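
The real-time evaluation flow behind these challenges can be sketched in miniature. The snippet below is an illustrative Python sketch, not the production code: the field names, model weights, and in-memory feature store are hypothetical stand-ins (the deployed system serves precomputed features from Redis and versioned models from MLflow).

```python
import math
import time
from dataclasses import dataclass

# Hypothetical in-memory feature store; the production system serves
# precomputed features from Redis.
FEATURE_STORE = {
    "CZ": {"origin_risk": 0.12},
    "DE": {"origin_risk": 0.05},
}

# Hypothetical model coefficients; the real model is trained, versioned,
# and served via MLflow.
WEIGHTS = {"origin_risk": 0.8, "declared_value": 0.0001}
BIAS = -0.5

@dataclass
class Shipment:
    origin_country: str
    declared_value: float

def validate(shipment: Shipment) -> None:
    """Reject malformed queries before they reach the model."""
    if shipment.origin_country not in FEATURE_STORE:
        raise ValueError(f"unknown origin country: {shipment.origin_country}")
    if shipment.declared_value < 0:
        raise ValueError("declared value must be non-negative")

def score(shipment: Shipment) -> dict:
    """Validate a single shipment, enrich it with precomputed features,
    and return a fraud probability together with the observed latency."""
    start = time.perf_counter()
    validate(shipment)
    features = FEATURE_STORE[shipment.origin_country]
    raw = (BIAS
           + WEIGHTS["origin_risk"] * features["origin_risk"]
           + WEIGHTS["declared_value"] * shipment.declared_value)
    probability = 1.0 / (1.0 + math.exp(-raw))  # logistic link
    return {
        "fraud_probability": round(probability, 4),
        "latency_ms": (time.perf_counter() - start) * 1000.0,
    }
```

Keeping features precomputed and close to the API is what makes a sub-200 ms budget achievable: the request path does no heavy computation, only a lookup and a cheap model evaluation.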

Conclusion

We are proud to have successfully deployed the solution in a production environment within six months. Ongoing performance monitoring and validation for 12 origin countries have been successful, and additional countries are gradually being added and tested over time. The goal is to roll out the application globally within the first half of 2023.

Dive into similar articles

The latest industry news, interviews, technologies, and resources.


EU AI Act: What It Is, Who It Applies To, and How We Can Help Your Company Comply Stress-Free

In 2024, the so-called AI Act came into effect, becoming the first comprehensive European Union law regulating the use and development of artificial intelligence. Which companies does it affect, how can you avoid draconian fines, and how does it work if you want someone else, like BigHub, to handle all the compliance concerns for you? The development of artificial intelligence has accelerated so rapidly in recent years that legislation must respond just as quickly. At BigHub, we believe this is a step in the right direction.
What the AI Act is and why it was introduced

The AI Act is the first EU-wide law that sets rules for the development and use of artificial intelligence. The rationale behind this legislation is clear: only with clear rules can AI be safe, transparent, and ethical for both companies and their customers.

Artificial intelligence is increasingly penetrating all areas of life and business, so the EU aims to ensure that its use and development are responsible and free from misuse, discrimination, or other negative impacts. The AI Act is designed to protect consumers, promote fair competition, and establish uniform rules across all EU member states.

Who the AI Act applies to

The devil is often in the details, and the AI Act is no exception. This legislation affects not only companies that develop AI but also those that use it in their products, services, or internal processes. Typically, companies that must comply with the AI Act include those that:

  • Develop AI

  • Use AI for decision-making about people, such as recruitment or employee performance evaluation

  • Automate customer services, for example, chatbots or voice assistants

  • Process sensitive data using AI

  • Integrate AI into products and services

  • Operate third-party AI systems, such as implementing pre-built AI solutions from external providers

The AI Act distinguishes between standard software and AI systems, so it is always important to determine whether a solution operates autonomously and adaptively (that is, it learns from data and optimizes its results) or merely executes predefined instructions, in which case it does not meet the definition of an AI system.

Importantly, the legislation applies not only to new AI applications but also to existing ones, including machine learning systems.

To save you from spending dozens of hours worrying whether your company fully complies, BigHub is ready to handle AI Act implementation for you.

What the AI Act regulates

The AI Act defines many detailed requirements, but for businesses using AI, the key areas to understand include:

1. Risk classification

The legislation categorizes AI systems by risk level, from minimal risk to high risk, and even banned applications.

2. Obligations for developers and operators

This includes compliance with safety standards, regular documentation, and ensuring strict oversight.

3. Transparency and explainability

Users of AI tools must be aware they are interacting with artificial intelligence.

4. Prohibited AI applications

For example, systems that manipulate human behavior or intentionally discriminate against specific groups.

5. Monitoring and incident reporting

Companies must report adverse events or malfunctions of AI systems.

6. Processing sensitive data

The AI Act regulates the use of personal, biometric, or health data of anyone interacting with AI tools.

Avoid massive fines

Penalties for non-compliance with the AI Act are high, potentially reaching up to 7% of a company’s global annual turnover, which can amount to millions of euros for some businesses.

This makes it crucial to implement the new AI regulations promptly in all areas where AI is used.

Let us handle AI Act compliance for you

Don’t have dozens of hours to study complex laws and don’t want to risk huge fines? Why not let BigHub manage AI Act compliance for your company? We help clients worldwide implement best practices and frameworks, accelerate innovation, and optimize processes, and we are ready to do the same for you.

We offer turnkey AI solutions, including integrating AI Act compliance. Our process includes:

  • Creating internal AI usage policies for your company

  • Auditing the AI applications you currently use

  • Ensuring existing and newly implemented AI applications comply with the AI Act

  • Assessing risks so you know which AI systems you can safely use

  • Mapping your current situation and helping with necessary documentation and process obligations


Databricks Mosaic vs. Custom Frameworks: Choosing the Right Path for GenAI

Generative AI today comes in many forms – from proprietary APIs and frameworks (such as Microsoft’s Response API or Agent AI Service), through open-source frameworks, to integrated capabilities directly within data platforms. One option is Databricks Mosaic, which provides a straightforward way to build initial GenAI applications directly on top of an existing Databricks data platform. At BigHub, we work with Databricks on a daily basis and have hands-on experience with Mosaic as well. We know where this technology delivers value and where it begins to show limitations. In some cases, we’ve even seen clients push Databricks Mosaic as the default choice, only to face unnecessary trade-offs in quality and flexibility. Our role is to help clients make the right call: when Mosaic is worth adopting, and when a more flexible custom framework is the smarter option.
Why Companies Choose Databricks Mosaic

For organizations that already use Databricks as their data platform, it is natural to also consider Mosaic. Staying within a single ecosystem brings architectural simplicity, easier management, and faster time-to-market.

Databricks Mosaic offers several clear advantages:

  • Simplicity: building internal chatbots and basic agents is quick and straightforward.
  • Governance by design: logging, lineage, and cost monitoring are built in.
  • Data integration: MCP servers and SQL functions allow agents to work directly with enterprise data.
  • Developer support: features like Genie (a Fabric Copilot competitor) and assisted debugging accelerate development.

For straightforward scenarios, such as internal assistants working over corporate data, Databricks Mosaic is fast and effective. We’ve successfully deployed Mosaic for a large manufacturing company and a major retailer, where the need was simply to query and retrieve data.

Where Databricks Mosaic Falls Short

More complex projects introduce very different requirements – around latency, accuracy, multi-agent logic, and integration with existing enterprise systems. Here, Databricks Mosaic quickly runs into limits:

  • Structured output: Databricks Mosaic cannot effectively enforce structured output, which impacts the quality and operational stability of various solutions (e.g., voicebots or OCR).
  • Multi-step workflows: processes such as insurance claims, underwriting, or policy issuance are either unfeasible or overly complicated within Databricks Mosaic.
  • Latency-sensitive scenarios: Databricks Mosaic adds an extra endpoint layer between user and model, which makes low-latency use cases difficult.
  • Integration outside Databricks: unless you only use Vector Search and Unity Catalog, connecting to other systems is more complex than in a Python-based custom framework.
  • Limited model catalog: only a handful of models are available. You cannot bring your own models or integrate models hosted in other clouds.

Even Databricks itself admits Mosaic isn’t intended to replace specialized frameworks. That’s true to a degree, but the overlap is real – and in advanced use cases, Mosaic’s lack of flexibility becomes a bottleneck.

Where a Custom Framework Makes Sense

A custom framework shines where projects demand complex logic, multi-agent orchestration, streaming, or low-latency execution:

  • Multiple agents: agents with different roles and skills collaborating on a single task.
  • Streaming and real-time: essential for call centers, voicebots, and fraud detection.
  • Custom logic: precisely defined workflows and multi-step processes.
  • Regulatory compliance: full transparency and auditability in line with the AI Act.
  • Flexibility: ability to use any libraries, models, and architectures without vendor lock-in.
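
To make the structured-output point concrete, here is a minimal Python sketch of the kind of schema enforcement a custom framework allows: the model's raw output is validated against an expected schema, and the call is retried on failure. The field names and the `call_model` function are hypothetical stand-ins; real implementations typically delegate validation to a library such as Pydantic.

```python
import json

# Hypothetical target schema for a claim-extraction agent (e.g. OCR of a
# claim form); the field names are illustrative only.
REQUIRED_FIELDS = {"claim_id": str, "amount": float, "currency": str}

def parse_structured(raw: str) -> dict:
    """Parse model output and enforce the expected structure, raising on
    any deviation so the caller can re-prompt."""
    data = json.loads(raw)
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"field {field!r} should be {expected_type.__name__}")
    return data

def extract_with_retry(call_model, max_attempts: int = 3) -> dict:
    """Call the model (any function returning raw text) until it produces
    valid structured output, or give up after max_attempts."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return parse_structured(call_model())
        except ValueError as exc:  # json.JSONDecodeError is a ValueError
            last_error = exc  # in practice, feed the error back into the prompt
    raise RuntimeError(f"no valid structured output after retries: {last_error}")
```

This retry-with-validation loop is exactly the kind of control that is hard to express when the platform owns the endpoint layer between you and the model.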

This doesn’t mean Databricks Mosaic can’t ever be used for business-critical workloads – in some cases it can. But in applications where latency, structured output, or high precision are non-negotiable, Mosaic is not yet mature enough.

How BigHub Approaches It

From our experience, there’s no one-size-fits-all answer. Databricks Mosaic works well in some contexts, while in others a custom framework is the only viable option.

  • Manufacturing & Retail: We used Databricks Mosaic to build internal assistants that answer queries over corporate data (SQL queries). Deployment was fast, governance was embedded, and the solution fit the use case perfectly.
  • Insurance (Claims Processing): Here, Databricks Mosaic simply wasn’t sufficient. It lacked structured output, multi-agent orchestration, and voice processing. We delivered a custom framework that achieved the required accuracy, supported multi-step workflows, and met audit requirements under the AI Act.
  • Banking (Underwriting, Policy Issuance): Banking workflows often involve multiple steps and integration with core systems. Implementing these in Databricks Mosaic is overly complex. We used a custom middleware layer that orchestrates multiple agents and supports models from different clouds.
  • Call Centers & OCR: Latency-critical applications and use cases requiring structured outputs (e.g. form data extraction, voicebots) are not supported by Databricks Mosaic. These are always delivered using custom solutions.

Our role is not to push a single technology but to guide clients toward the best choice. Sometimes Databricks Mosaic is the right fit, sometimes a custom framework is the only way forward. We ensure both a quick start and long-term sustainability.

Our Recommendation
  • Databricks Mosaic: best suited for organizations already invested in Databricks that want to deploy internal assistants or basic agents with strong governance and monitoring.
  • Custom framework: the right choice when projects require complex multi-step workflows, multi-agent orchestration, structured outputs, or low latency.

At BigHub, we’ve worked extensively with both approaches. What we deliver is not just technology, but the expertise to recommend and build the right combination for each client’s unique situation.


Why MCP might be the HTTP of the AI-first era

MCP (Model Context Protocol) isn’t just another technical acronym. It’s one of the first foundational steps toward a world where digital operations are not driven by people, but by intelligent systems. And while it’s currently being discussed mostly in developer circles, its long-term impact will reshape how companies communicate, sell, and operate in the digital landscape.
What Is MCP – and Why Should You Care?

Model Context Protocol may sound like something out of an academic paper or internal Big Tech documentation. But in reality, it’s a standard that enables different AI systems to seamlessly communicate—not just with each other, but also with APIs, business tools, and humans.

Today’s AI tools—whether chatbots, voice assistants, or automation bots—are typically limited to narrow tasks and single systems. MCP changes that. It allows intelligent systems to:

  • Check your e-commerce order status
  • Review your insurance contract
  • Reschedule your doctor’s appointment
  • Arrange delivery and payment

All without switching apps or platforms. And more importantly: without every company needing to build its own AI assistant. All it takes is making services and processes “MCP-accessible.”
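
The pattern can be illustrated with a toy example. The sketch below is not the real MCP SDK or wire protocol; it is a simplified, stdlib-only Python illustration of MCP's central idea: a service publishes a machine-readable tool catalog that any AI client can discover and invoke. The method names mirror MCP's `tools/list` and `tools/call`, and the `order_status` tool and its fields are hypothetical.

```python
import json

# A toy tool registry: each entry pairs a human-readable description
# (which the AI client reads to decide what to call) with a handler.
TOOLS = {
    "order_status": {
        "description": "Look up the status of an e-commerce order",
        "handler": lambda order_id: {"order_id": order_id, "status": "shipped"},
    },
}

def handle(request_json: str) -> str:
    """Dispatch a single client request against the tool registry.
    The real protocol runs over JSON-RPC with richer schemas, sessions,
    and capability negotiation."""
    request = json.loads(request_json)
    if request["method"] == "tools/list":
        result = [{"name": name, "description": tool["description"]}
                  for name, tool in TOOLS.items()]
    elif request["method"] == "tools/call":
        tool = TOOLS[request["params"]["name"]]
        result = tool["handler"](**request["params"]["arguments"])
    else:
        return json.dumps({"error": f"unknown method: {request['method']}"})
    return json.dumps({"result": result})
```

An AI assistant that speaks this protocol needs no company-specific integration: it lists the tools, reads their descriptions, and calls whichever one matches the user's request.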

From AI as a Tool to AI as an Interface

Until now, AI in business has mostly served as a support tool for employees—helping with search, data analysis, or faster decision-making. But MCP unlocks a new paradigm:

Instead of building AI tools for internal use, companies will expose their services to be used by external AI systems—especially those owned by customers themselves.

That means the customer is no longer forced to use the company’s interface. They can interact with your services through their own AI assistant, tailored to their preferences and context. It’s a fundamental shift. Just as the web changed how we accessed information, and mobile apps changed how we shop or travel, MCP and intelligent interfaces will redefine how people interact with companies.

The AI-First Era Is Already Here

It wasn’t long ago that people began every query with Google. Today, more and more users turn first to ChatGPT, Perplexity, or their own digital assistant. That shift is real: AI is becoming the entry point to the digital world.

“Web-first” and “mobile-first” are no longer enough. We’re entering an AI-first era—where intelligent interfaces will be the first layer that handles requests, questions, and decisions. Companies must be ready for that.

What This Means for Companies
1. No More Need to Build Your Own Chatbot

Companies spend significant resources building custom chatbots, voice systems, and interfaces. These tools are expensive to maintain and hard to scale.

With MCP, the user shows up with their own AI system and expects only one thing: structured access to your services and information. No need to worry about UX, training models, or customer flows—just expose what you do best.

2. Traditional Call Centers Become Obsolete

Instead of calling your support line, a customer can query their AI assistant, which connects directly to your systems, gathers answers, or executes tasks.

No queues. No wait times. No pressure on your staffing model. Operations move into a seamless, automated ecosystem.

3. New Business Models and Brand Trust

Because users will bring their own trusted digital interface, companies no longer carry the burden of poor chatbot experiences. And thanks to MCP’s built-in structure for access control and transparency, businesses can decide who sees what, when, and how—while building trust and reducing risks.

What This Means for Everyday Users
  • One interface for everything: no more juggling dozens of logins, websites, or apps. One assistant does it all.
  • True autonomy: your digital assistant can order products, compare options, request refunds, or manage appointments—no manual effort required.
  • Smarter, faster decisions: the system knows your preferences, history, and goals—and makes intelligent recommendations tailored to you.

Practical example:

You ask your AI to generate a recipe, check your pantry, compare prices across online grocers, pick the cheapest options, and schedule delivery—all in one go, no clicking required.

The Underrated Challenge: Data

For this to work, users will need to give their AI systems access to personal data. And companies will need to open up parts of their systems to the outside world. That’s where trust, governance, and security become mission-critical. MCP provides a standardized framework for managing access, ensuring safety, and scaling cooperation between systems—without replicating sensitive data or creating silos.

Get your first consultation free

Want to discuss the details with us? Fill out the short form below. We’ll get in touch shortly to schedule your free, no-obligation consultation.
