Blog

BigHub Blog - Read, Discover, Get inspired.

The latest industry news, interviews, technologies, and resources.

AI

Why MCP might be the HTTP of the AI-first era

MCP (Model Context Protocol) isn’t just another technical acronym. It’s one of the first foundational steps toward a world where digital operations are not driven by people, but by intelligent systems. And while it’s currently being discussed mostly in developer circles, its long-term impact will reshape how companies communicate, sell, and operate in the digital landscape.

What Is MCP – and Why Should You Care?

Model Context Protocol may sound like something out of an academic paper or internal Big Tech documentation. But in reality, it’s a standard that enables different AI systems to seamlessly communicate—not just with each other, but also with APIs, business tools, and humans.

Today’s AI tools—whether chatbots, voice assistants, or automation bots—are typically limited to narrow tasks and single systems. MCP changes that. It allows intelligent systems to:

  • Check your e-commerce order status
  • Review your insurance contract
  • Reschedule your doctor’s appointment
  • Arrange delivery and payment


All without switching apps or platforms. And more importantly: without every company needing to build its own AI assistant. All it takes is making services and processes “MCP-accessible.”
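Under the hood, MCP messages travel as JSON-RPC 2.0. As a rough sketch of what "MCP-accessible" means in practice (the `tools/call` method follows the public MCP specification, but the order-status tool and its payloads are invented for illustration), an assistant querying a shop's service might exchange messages like this:

```python
import json

# Hypothetical tool call: an assistant asks a shop's MCP server for the
# status of order 1042. MCP messages are JSON-RPC 2.0; "tools/call" is
# the method the spec defines for invoking a server-exposed tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_order_status",          # hypothetical tool name
        "arguments": {"order_id": "1042"},
    },
}

# The server replies with the tool's result as a list of content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "Order 1042: shipped, ETA Friday"}],
    },
}

wire = json.dumps(request)            # what actually travels over the transport
print(json.loads(wire)["method"])     # -> tools/call
```

The point of the standard is exactly this uniformity: any assistant that speaks MCP can call any company's tools without bespoke integration work on either side.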

From AI as a Tool to AI as an Interface

Until now, AI in business has mostly served as a support tool for employees—helping with search, data analysis, or faster decision-making. But MCP unlocks a new paradigm:

Instead of building AI tools for internal use, companies will expose their services to be used by external AI systems—especially those owned by customers themselves.

That means the customer is no longer forced to use the company’s interface. They can interact with your services through their own AI assistant, tailored to their preferences and context. It’s a fundamental shift. Just as the web changed how we accessed information, and mobile apps changed how we shop or travel, MCP and intelligent interfaces will redefine how people interact with companies.

The AI-First Era Is Already Here

It wasn’t long ago that people began every query with Google. Today, more and more users turn first to ChatGPT, Perplexity, or their own digital assistant. That shift is real: AI is becoming the entry point to the digital world.

“Web-first” and “mobile-first” are no longer enough. We’re entering an AI-first era—where intelligent interfaces will be the first layer that handles requests, questions, and decisions. Companies must be ready for that.

What This Means for Companies

1. No More Need to Build Your Own Chatbot

Companies spend significant resources building custom chatbots, voice systems, and interfaces. These tools are expensive to maintain and hard to scale.

With MCP, the user shows up with their own AI system and expects only one thing: structured access to your services and information. No need to worry about UX, training models, or customer flows—just expose what you do best.

2. Traditional Call Centers Become Obsolete

Instead of calling your support line, a customer can query their AI assistant, which connects directly to your systems, gathers answers, or executes tasks.

No queues. No wait times. No pressure on your staffing model. Operations move into a seamless, automated ecosystem.

3. New Business Models and Brand Trust

Because users will bring their own trusted digital interface, companies no longer carry the burden of poor chatbot experiences. And thanks to MCP’s built-in structure for access control and transparency, businesses can decide who sees what, when, and how—while building trust and reducing risks.

What This Means for Everyday Users

  • One interface for everything: no more juggling dozens of logins, websites, or apps. One assistant does it all.
  • True autonomy: your digital assistant can order products, compare options, request refunds, or manage appointments—no manual effort required.
  • Smarter, faster decisions: the system knows your preferences, history, and goals—and makes intelligent recommendations tailored to you.

Practical example:

You ask your AI to generate a recipe, check your pantry, compare prices across online grocers, pick the cheapest options, and schedule delivery—all in one go, no clicking required.

The Underrated Challenge: Data

For this to work, users will need to give their AI systems access to personal data. And companies will need to open up parts of their systems to the outside world. That’s where trust, governance, and security become mission-critical. MCP provides a standardized framework for managing access, ensuring safety, and scaling cooperation between systems—without replicating sensitive data or creating silos.

Most recent
BigHub

BigHack: Turning Real Challenges into AI Solutions

At the forefront of innovation, we've evolved from open-source enthusiasts to cloud experts — and now, we’re diving headfirst into the world of Generative AI. To accelerate our learning and put theory into practice, we hosted an internal hackathon focused entirely on real-life business use cases powered by generative models.

Pushing Boundaries with Generative AI

Artificial intelligence has always been in our DNA. Long before ChatGPT made headlines, we were using early GPT models for tasks like analyzing customer reviews. But the new wave of generative models has fundamentally changed what's possible. To keep up with — and get ahead of — this rapidly evolving space, we launched an internal initiative: BigHack, a hands-on hackathon focused on generative AI and its real-world applications.

We didn’t want to experiment in a vacuum. Instead, we chose real client challenges, making sure our teams were tackling scenarios that reflect the current business environment. The goal? Accelerate learning, build practical experience, and surface innovative AI-powered solutions for tangible problems.

Secure and Scalable: Choosing the Right Platform

One key consideration was how to work with these models in a secure and scalable way. While OpenAI’s public services have sparked mainstream adoption, they come with serious concerns around data privacy and usage. Many corporations have responded by banning them entirely.

That’s why we opted for Microsoft’s Azure OpenAI Service. It allows us to leverage the same powerful models while ensuring enterprise-grade data governance. With Azure, all data remains within the client’s control and is not used for model training. Plus, setting up the infrastructure was faster than ever — minutes instead of days compared to older on-prem solutions.

From Ideas to Prototypes: Two Real Projects

We selected two project ideas from a larger pool of discussions and got to work. With full team involvement and the power of the cloud, we built two working prototypes that could easily transition into production.

1. “Ask Your Data” App

This application enables business users to query analytical data using natural language, removing the dependency on data analysts for routine questions like:

  • “What’s our churn rate?”
  • “Where can we cut IT costs?”

By connecting directly to existing data sources, the app delivers answers in seconds, democratizing access to insights.
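The pattern behind such an app is simple: a generative model translates the business question into SQL, and the answer is read straight from the data source. A minimal sketch of that flow, with the model step stubbed out by a fixed template (the `customers` table and `churned` column are invented for illustration):

```python
import sqlite3

def question_to_sql(question: str) -> str:
    """Stand-in for the LLM step: in the real app, a generative model would
    translate the free-form question into SQL against a known schema."""
    if "churn" in question.lower():
        return "SELECT ROUND(100.0 * SUM(churned) / COUNT(*), 1) FROM customers"
    raise ValueError("question not understood")

# Toy warehouse: 4 customers, 1 of whom churned.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, churned INTEGER)")
db.executemany("INSERT INTO customers VALUES (?, ?)",
               [(1, 0), (2, 1), (3, 0), (4, 0)])

sql = question_to_sql("What's our churn rate?")
churn_rate = db.execute(sql).fetchone()[0]
print(f"Churn rate: {churn_rate}%")  # -> Churn rate: 25.0%
```

In production the stub is replaced by a model that knows the warehouse schema, and the generated SQL runs against the governed data sources rather than an in-memory toy table.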

2. Intelligent Internal Knowledge Access

Our second prototype tackled unstructured internal documentation — especially regulatory and policy content. We built a system that allows employees to ask free-form questions and get accurate responses from dense, often chaotic documentation.

This solution is built on two key pillars:

  • Automated Reporting: Eliminates the need for complex dashboard setups by using AI to interpret well-governed data and generate reports based on simple queries.
  • Regulatory Q&A: Helps users instantly find information buried in hundreds of pages of internal compliance or legal documents.
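The regulatory Q&A pillar follows the now-standard retrieval pattern: split the documents into chunks, rank the chunks against the question, and hand the best ones to a generative model as context. A deliberately simplified sketch of the ranking step (real systems use vector embeddings rather than word overlap, and the policy snippets here are invented):

```python
def rank_chunks(question: str, chunks: list[str]) -> list[str]:
    """Rank document chunks by word overlap with the question.
    A production system would use embedding similarity instead."""
    q_words = set(question.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q_words & set(c.lower().split())),
                  reverse=True)

policies = [
    "Expense reports must be filed within 30 days of travel.",
    "Remote work requires written manager approval.",
    "Data retention period for customer records is 5 years.",
]

best = rank_chunks("How long is the retention period for customer records?",
                   policies)[0]
print(best)  # the retention-policy chunk ranks first
```

The top-ranked chunks are then passed to the model, which phrases the final answer and can cite the exact passage it drew on.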

June 5, 2023
AI

How Our AI System Fights Fraud in International Shipping

In the world of logistics, fraudulent and dangerous packages are among the industry's biggest challenges. That's why a major multinational logistics company turned to BigHub for help implementing a system for early detection. With the goal of deploying a solution for real-time evaluation of shipments as they enter the transportation network, our team at BigHub faced several challenges, such as scaling the REST API and managing the ML lifecycle.

BigHub has a longstanding partnership with a major international logistics firm, during which it has successfully delivered a diverse range of data projects. These projects span data engineering, real-time data processing, and cloud- and machine learning-based applications, all designed and developed to enhance the logistics company's operations: warehouse management, supply chain optimization, and the daily transportation of thousands of packages worldwide.

In 2022, BigHub was presented with a new challenge: to aid in the implementation of a system for the early detection of suspicious fraudulent shipments entering the company's logistic network. Based on the client's pilot solution, which had been developed and tested using historical data, BigHub improved the algorithms and deployed them in a production environment for real-time evaluation of shipments as they entered the transportation network. The initial pilot solution was based on batch evaluation, but the requirement for our team was to create a REST API that could handle individual queries with a response time of less than 200 milliseconds. This API would be connected to the client's network, where further operations would be carried out on the data.

High-level Architecture

The proposed application is designed with a high-level architecture, as illustrated in the accompanying diagram. The core of the system is the REST API, which is connected to the client's network to receive and process queries. These queries are subject to validation and evaluation, with the results then returned to the end user. The data layer serves as the foundation for the calculations, as well as for the training of models and pre-processing of feature tables. The evaluation results are also stored in the data layer to facilitate the production of summary analyses in the reporting layer. The MLOps layer manages the lifecycle of the machine learning model, including training, validation, storage of metrics for each model version and making the current version of the model accessible via the REST API. To achieve this, the whole solution leverages a variety of modern data technologies, including Kubernetes, MLFlow, AirFlow, Teradata, Redis and Tableau.

[Figure: high-level architecture]

During the development of the system, our team needed to address several challenges, including:

  • Setup and scaling of the REST API to handle a high volume of queries (260 queries from 30 parallel resources per second) in real-time, ensuring it is ready for global deployment.
  • Optimizing the evaluation speed of individual queries, through the use of low-level programming techniques, to reduce the time from hundreds of milliseconds to tens of milliseconds.
  • Managing the machine learning model lifecycle, including automated retraining, deployment of new versions into API, monitoring of quality and notifications, to ensure reliable long-term performance.
  • Implementing modifications on the fly: our agile approach ensured flexibility and allowed quick, successful changes to the ongoing project, keeping both parties satisfied and improving results.
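One common way to reach tens-of-milliseconds responses at that query volume is to precompute per-shipment features offline and keep them in an in-memory store, so the request path is just a lookup plus a cheap model evaluation. A hedged sketch of that request path (a plain dict stands in for Redis, and the logistic-regression weights and package IDs are invented for illustration):

```python
import math

# Feature store: precomputed offline and refreshed by batch jobs.
# In the real system this role is played by Redis; a dict stands in here.
feature_cache = {
    "PKG-001": {"weight_kg": 2.0, "declared_value": 40.0, "route_risk": 0.1},
    "PKG-002": {"weight_kg": 0.2, "declared_value": 900.0, "route_risk": 0.8},
}

# Invented logistic-regression weights, for illustration only.
WEIGHTS = {"weight_kg": -0.3, "declared_value": 0.004, "route_risk": 3.0}
BIAS = -2.0

def fraud_score(package_id: str) -> float:
    """Request path: one cache lookup plus a dot product -- no I/O, no retraining."""
    feats = feature_cache[package_id]
    z = BIAS + sum(WEIGHTS[k] * v for k, v in feats.items())
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability of fraud

print(round(fraud_score("PKG-002"), 3))  # light parcel, high value, risky route
```

Keeping the hot path free of database calls and model loading is what makes a sub-200 ms budget achievable even under hundreds of parallel queries per second.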

January 31, 2023
BigHub

BigHub is a proud DATA Mesh partner

BigHub has become a proud partner of #DATAMesh. This informal, regular meetup focuses on data and everything that happens around it. Data engineers, analysts, data scientists, and AI startup founders will find their peers right there, in the K7 club.

An informal, regular meetup full of the latest data gossip, where you can attend talks from anybody who loves data - from legendary startup founders to junior data enthusiasts. We’re talking about the DATA Mesh meetup, of which we are a partner.

"We were looking for a place where we could meet informally to discuss and share experience from interesting projects with each other. That's why we decided to do regular data meetups," says Karel Šimánek, CEO of BigHub and one of the organizers and founders of DATA Mesh.

The meetup is held every month at the K7 club in Prague's Vršovice and visitors can enjoy short but inspiring presentations from various fields with people who have real experience with data and ML applications. Of course, there is also an afterparty, where you can make great contacts, and play the legendary Atomic Bomberman game for awesome prizes.

In total, there have been five meetups so far, with guests like Sara Polak, Jan Kučera from Datamole, Lukáš Jelínek from Pocket Virtuality, and Jan Šindera from Nano energies.

June 16, 2022
BigHub

BigHub scored in the Deloitte Technology Fast 50 CE 2021 competition

BigHub scored in the Deloitte Technology Fast 50 CE 2021 competition. With total growth of 1,795%, we closed out the top ten fastest-growing technology companies in Central Europe, and we also made it into the Czech top 50, where we finished sixth!

Tenth in Central Europe and sixth in the Czech Republic. These are the results of the Deloitte Technology Fast 50 CE 2021 competition, which compares the growth of registered technology companies over the previous four years, from 2017 to 2020. This year, 19 companies from the Czech Republic made it into the CE Top 50; the top ten winners included FTMO, DoDo, Driveto, and DataSentics, and our BigHub was among them. We are very grateful to be ranked among such great companies.

With overall growth of 1,795%, we closed out the top ten companies in CE. In addition, 139 local companies applied this year, making it possible to compile a ranking of the 50 top Czech technology companies, where we ended up in sixth place. “The experience from previous years shows that thanks to the Fast 50 programme, companies manage to find interesting opportunities, change their ways and keep growing,” says Jiří Sauer, CE Technology Fast 50 Programme Leader.

The dominance of the Czech tech scene was impossible to overlook. We have 6 representatives in the European top 10 and 19 domestic tech leaders in the European top 50. This year's edition also recorded the highest growth figures to date, with the average growth of registered companies reaching 2,278%.

The ceremony took place in the Archa Theatre, and the evening was hosted by moderator Tomáš Studeník. You can find all the Czech and Central European companies awarded in the Deloitte Technology Fast 50 programme on this page.

December 2, 2021
BigHub

BigHub wins Readers' Choice Award for the best business story in the EY Entrepreneur of the Year competition

Meet our founders Tomáš Hubínek and Karel Šimánek, whose story won the Readers' Choice Award for the best business story in the EY Entrepreneur of the Year competition.

Tomáš Hubínek and Karel Šimánek founded BigHub, a technology company that specializes in applied AI. Their skillset and technology stack have multiple applications, from predicting employee turnover to detecting electricity theft.

They met while studying at the Czech Technical University in Prague. Both of them were working in consulting and banking firms before realizing the untapped potential of AI in business. It all led to the founding of BigHub in 2016, which has been profitable since its inception and is on track to reach a turnover of CZK 100 million.

Their story has garnered attention and recognition from readers of Mladá fronta and Idnes.cz. The readers awarded BigHub the Readers' Choice Award for the Best Business Story in the EY Podnikatel roku competition, a globally renowned competition for entrepreneurs founded by EY in 1986.

Detection and Prediction by BigHub

BigHub's work spans a variety of industries, including energy, logistics, retail, and human resources. Its neural network technology helps detect unwanted vibrations in the turbines of the Temelin nuclear power plant, predicts equipment failures at fueling stations, and powered the development and implementation of an early detection system for identifying fraudulent shipments entering a logistics network.

"This year's winners are already showing us the environment in which the human species will live in the near future. And they also represent the future of high-end IT businesses, where the Czechs have traditionally asserted themselves in the strongest global competition. I believe that similar success awaits BigHub's Tomáš Hubínek and Karel Šimánek as well, and I congratulate both gentlemen on their victory," said Jaroslav Plesl, Editor-in-Chief of MF DNES, to the winners of the Readers' Choice Award.

Another area BigHub would like to enter is healthcare. "I like projects that have an impact and medicine is an area where AI can be very useful," admits Hubínek.

Constant monitoring of AI trends 

In addition to their work at BigHub, Tomáš and Karel are actively involved in various initiatives to support and promote the development of AI technology and of talented students. Both lecture at the Czech Technical University in Prague and at high schools, and they are dedicated to staying up to date with the latest AI trends.

They also organize regular meetups where industry professionals from different companies network and share their expertise, and they participate in a data podcast that provides insightful interviews on expert topics from the Czech and Slovak data scene. Furthermore, they support the Josef Hlávka Prize, which recognizes and rewards talented students.

March 24, 2023

Get your first consultation free

Want to discuss the details with us? Fill out the short form below. We’ll get in touch shortly to schedule your free, no-obligation consultation.

Trusted by 100+ businesses