Datart, a member of the HP TRONIC group portfolio, is one of the leading consumer-electronics retailers in the Czech Republic and Slovakia. Whether in brick-and-mortar stores or through its e‑shops, Datart has served customers for 32 years. It received the Mastercard Merchant of the Year 2021 award in the electrical engineering category and was named Most Trusted Brand in 2020 and 2021. In 2022, the HP TRONIC group, including DATART, increased its turnover by 5% to CZK 25.3 billion, with an EBITDA of CZK 942 million.
In this article you will learn that:
- there is always a reason to test
- the deployment of a test must be flawless
- an A/B test can be a great starting point
- significant results can be achieved in a relatively short time
- even occasional testing can surprise you
Was it the analytics, curiosity, or intuition at the beginning?
Several high-profile clients started with Luigi’s Box in the same place as Datart: by deploying Luigi’s Box Analytics on their e‑shop. Since the analytics already provided plenty of information, the question was whether Luigi’s Box could be an equally reliable partner for search and autocomplete, with Luigi’s Box Search, as it had been for analytics.
Datart understands how important search is for its e‑shops and what a correct, or incorrect, configuration can affect. Aside from costs, other important metrics are at stake, such as generated revenue, conversion rate, and the number of completed purchases. A responsible approach certainly includes testing and verifying hypotheses. Datart innovates in several areas: it works to improve the user experience and, understandably, looks for better solutions that help users and increase profit at the same time.
Datart had been developing its search internally, and at some point it was necessary to verify that an external search could be as good as, if not better than, the internal one.
Deploying a test is not just a few clicks away
Whatever we want to test, it is crucial to think everything through in advance: on one side the methodology, the compared metrics, the feasibility of the test, and the evaluation method; on the other, the technical implementation, its difficulty, and the requirements for developers or other technical specializations. The test can run on either a frontend or a backend integration. In Datart’s case, it was the more complicated and complex backend integration. Most clients are professionally prepared for such testing, so tasks like these usually present no complications.
Thankfully, Datart has a team of specialists who did not underestimate any aspect, provided step-by-step information and specifications, and adjusted the necessary values to communicate correctly with our system through the API.
Even before running an A/B test, a set of preliminary tests is needed to eliminate errors that could significantly skew its outcome.
Why an A/B test?
There are many ways to try or test something. The first option is to replace one solution with another and examine whether the new one is more effective. However, this carries a fundamental risk: with a client the size of Datart, even a deviation of a few percentage points can translate into a significant drop in profits, which is unacceptable for any company.
A safer option is to start with a so-called offline A/B test: existing search results over a given period are compared with the results the tested tool would offer. We developed this testing method and have used it for a long time; it fundamentally eliminates the risk that the new search would display worse results, or none at all, for some phrases. This makes testing safer even for more prominent clients, where, given the number of searches for individual phrases, even a slight deviation can have a significant impact.
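To make the idea concrete, here is a minimal sketch of what such an offline comparison can look like. This is an illustration only, not Luigi’s Box’s actual method: the engine functions and field names are hypothetical stand-ins for “ask each engine for results on historical queries and flag regressions.”

```python
def compare_engines(queries, fetch_results_old, fetch_results_new, top_k=10):
    """For each historical query, compare the top results of two search
    engines and flag queries where the new engine returns no results."""
    report = []
    for query in queries:
        old = fetch_results_old(query)[:top_k]
        new = fetch_results_new(query)[:top_k]
        report.append({
            "query": query,
            "old_count": len(old),
            "new_count": len(new),
            "overlap": len(set(old) & set(new)),
            "empty_on_new": len(new) == 0,  # the critical regression to catch
        })
    return report

# Usage with stubbed engines (returning fake product IDs):
queries = ["tv", "iphone", "xyz-nonexistent"]
old_engine = lambda q: [] if q.startswith("xyz") else [q + "-1", q + "-2"]
new_engine = lambda q: [q + "-1", q + "-3"] if q == "tv" else old_engine(q)
report = compare_engines(queries, old_engine, new_engine)
```

A report like this lets you review every query where the new engine would have shown worse or empty results before any real visitor ever sees them.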
After an offline A/B test, clients move smoothly to a “live” online A/B test. The basis is a methodology in which some visitors are assigned one version and the others another. A display ratio between the two solutions is chosen; it can be 50:50, but at the beginning it may be, for example, 80:20, so the client can first make sure the tested variant is at least as good as the original. In the end, the two results are compared, and if the input values, for example the number of sessions, were significant, we can proceed to the evaluation.
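The visitor assignment described above can be sketched as follows. This is a generic, illustrative approach (hashing a visitor ID into a bucket), not a description of Luigi’s Box’s internal mechanism; the function name and IDs are made up.

```python
import hashlib

def assign_variant(visitor_id: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a visitor to 'control' or 'treatment'.

    Hashing the visitor ID keeps the assignment stable across sessions,
    and treatment_share lets you start cautiously at e.g. 0.2 (an 80:20
    split) before moving to 0.5 (a 50:50 split).
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "treatment" if bucket < treatment_share else "control"
```

Because the assignment is a pure function of the visitor ID, the same visitor always sees the same variant, which is essential for a clean comparison.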
The result in less than two weeks
If we were testing only isolated cases, or if the e‑shop had low traffic, the test would take too long, because the low number of visits would not reach significance. It would simply not be possible to determine which of the compared solutions wins the test.
Some clients test for a week, others for months. Considering the size of the traffic and the number of conversions, we can estimate how long a test will take. We then check the result for statistical significance to reduce the risk of a chance outcome. Everything is always defined in advance, so that the beginning and end of the test, the compared metrics, and the method of evaluation are clear. The rules must not be changed “during the game.”
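A standard way to run such a significance check on conversion rates is a two-proportion z-test. The sketch below is a textbook version with invented example numbers; the article does not disclose Datart’s raw counts or the exact methodology used.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.

    conv_a/n_a: conversions and sessions in variant A (control),
    conv_b/n_b: the same for variant B (treatment).
    Returns (z, p_value); a small p-value means the observed difference
    is unlikely to be due to chance alone.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF (computed with erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers only (not Datart's actual data):
z, p = two_proportion_z_test(conv_a=500, n_a=10_000, conv_b=590, n_b=10_000)
```

With these made-up inputs (5.0% vs. 5.9% conversion on 10,000 sessions each), the p-value falls well below the usual 0.05 threshold, so the difference would count as significant; with a smaller lift or less traffic, it would not.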
In this case, we had enough input data, good timing, and a precise evaluation of the test. The test at Datart ran from 1 to 12 September 2022. In less than two weeks, the necessary data was collected, the test could be evaluated, and a winner determined.
Luigi’s Box as an A/B test winner
The comparative A/B test was deployed on both the Czech and Slovak versions of the e‑shop. The two tests differ in some values, as the market in each country behaves differently when it comes to search.
On both tested e‑shops, the deployment ratio was 50:50: half of the users got the original version of the search, and half got the version with Luigi’s Box Search.
The results of both tests were significant according to the chosen methodology.
The main comparative metric was the conversion rate of users who used search.
After evaluating the tests, we found that Luigi’s Box Search was demonstrably better than the original search on both language versions of the e‑shop.
Another interesting finding is that our solution eliminated incorrect results and zero-result searches for some categories. Luigi’s Box can be a reliable answer if you are facing a similar relevance problem.
Conclusion
The chosen metrics demonstrated that the artificial intelligence and machine learning used by Luigi’s Box can increase the conversion rate, in this case by 14-19%. Depending on the segment and the state of the search, the difference can be even higher. Along with the conversion rate, sales naturally increased as well, which is the desired outcome. After such test results, it is no surprise that Datart will use Luigi’s Box Search as its search and autocomplete tool on both tested e‑shops.
This example clearly shows that the years of research and development behind our tool bring high added value to the client.