The Data Science Challenges Behind Dynamic Pricing

PricingHUB
10 min read · Dec 10, 2020


This article is the first in a series of five in which I'll share PricingHUB's experience of the main challenges we encountered while building a dynamic pricing solution for e-commerce retailers.

Each article will cover one of the following topics:

  1. How to select a dynamic pricing approach (the focus of this article)
  2. How to manage cross-elasticity and whether SKU-level price differentiation is worthwhile
  3. How to deal with conversion rate seasonality
  4. How to segment customers
  5. What really is the optimal price?

How to Select a Dynamic Pricing Approach

Dynamism describes a system that changes over time, but what counts as change depends on the observer's perception. That's why there are several ways of building a dynamic pricing solution. In this article I will describe the dynamic pricing approaches that I consider most relevant.

Rule-based Pricing

In these pricing systems, rules define the price of the product. Two elements are important here: granularity and baseline. Granularity refers to the product or end-consumer attributes that define the perimeter where a rule applies. A rule can set a price either by defining the price directly or by specifying the increment (or reduction) to be applied to a reference price, called the baseline.

Granularity based on product and end-consumer attributes
Rule-based price engines that combine multiple product and client attributes can produce very deep granularity, which makes the pricing strategy almost impossible for competitors to trace. In the travel industry, for example, pricing could be based on an almost endless number of product attributes: one way vs. round trip, origin, destination, length of journey, airline, fare category, etc. It could also be based on client attributes: number of passengers, day of the week the booking was made, operating system, device, geolocation, time of day of the purchase, etc. The granularity can become so fine that the system comes close to personalised pricing, which is widely perceived as unfair; for that reason, personalised pricing is sometimes referred to as "discriminatory pricing". An especially well-known example is the story of Amazon's supposedly personalised DVD pricing (David Streitfeld, Washington Post, September 2000).
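As a toy sketch, such a granular rule engine amounts to attribute matching. The rule structure and attribute names below are invented for illustration; they are not any real engine's schema:

```python
# Hypothetical sketch of a granular rule-based price engine: each rule
# constrains some product/client attributes, and the first matching rule
# (in priority order) sets the price.

def match(rule, context):
    """A rule matches when every attribute it constrains equals the context value."""
    return all(context.get(k) == v for k, v in rule["when"].items())

def price_for(context, rules, default_price):
    for rule in rules:                 # rules are evaluated in priority order
        if match(rule, context):
            return rule["price"]
    return default_price

rules = [
    {"when": {"trip": "round", "device": "mobile", "weekend": True}, "price": 129.0},
    {"when": {"trip": "round"}, "price": 119.0},
]

p = price_for({"trip": "round", "device": "mobile", "weekend": True}, rules, 99.0)
# p == 129.0: the most specific matching rule wins because it is listed first
```

Each extra attribute multiplies the number of possible perimeters, which is exactly how the granularity described above becomes so fine.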

These rule-based systems can be significantly enhanced through machine learning, more specifically through propensity models. These help differentiate clients by modelling their propensity to purchase or to add a delivery service, as well as their expected basket value or expected lifetime value, for example. Rules can then be set up to use these propensity scores as parameters.
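A minimal sketch of that idea, with a toy logistic function standing in for a trained propensity model (the features, weights and the 5% incentive rule are all made up for illustration):

```python
import math

def purchase_propensity(features):
    # Stand-in for a trained propensity model; weights are invented.
    z = 0.8 * features["visits_last_30d"] - 0.5 * features["cart_abandons"]
    return 1.0 / (1.0 + math.exp(-z))      # logistic squashing to [0, 1]

def rule_price(base_price, features):
    p = purchase_propensity(features)
    # Rule using the propensity score as a parameter:
    # low-propensity visitors get a 5% incentive, others pay the base price.
    return round(base_price * 0.95, 2) if p < 0.5 else base_price
```

In practice the score would come from a model trained on historical behaviour; the rule layer only consumes the number.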

It's important to note that granular pricing has in the past been labelled "dynamic pricing" when the pricing policy is in fact fully static. A price-engine user may set up an end-consumer rule under which the price of an item depends on the time of day (or the day of the week) at which the end consumer decides to buy. The mere fact that different customers arrive at different times gives the impression of prices evolving over time.

Baseline selection
In rule-based price engines that set prices by specifying an increment on top of a reference price, different baselines can be used, for example the provider price, the break-even price, last year's price, or a competitor's price. I'll focus on the last one, as it is historically the most common approach in the retail industry. These rules fall into the category of "If company A offers SKU X at price P_A, then my price (my_P) will be 10 € cheaper":

my_P = P_A − 10 €   (eq. 1)
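As code, equation 1 is a one-line static policy whose output changes only because its input does (the prices below are arbitrary examples):

```python
# Equation 1 as code: the pricing policy never changes, but the price it
# emits does, because the external signal (the competitor's price) evolves.
def my_price(competitor_price, offset=10.0):
    return competitor_price - offset

my_price(199.0)   # 189.0
my_price(179.0)   # 169.0  -- same rule, different environment
```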

The dynamism here arises because the rules involve an external signal that may evolve with time: the competitor’s price. The pricing policy, represented by equation 1, is static but it is the environment or context that changes (i.e. the value of the variable P_A in the equation).

Despite the widespread use of competitor-based pricing rules, I see a major drawback: competitors might not share your company's business goals. Your performance goals may shift from volume to profit several times a year, depending on supplier negotiations, proximity to the end of the fiscal year, and many other situations. Pricing is a very important lever for both top-line and bottom-line goals. So why align with a competitor's price strategy, and compromise such an important lever, when that competitor may be striving for completely different goals? Experience tells me companies will argue that otherwise they wouldn't get the sales. But experience also tells me that competitor comparison matters less than it looks at first glance: brand value, quality of service, extra services, UX and so on also matter, and end customers don't have endless time to keep comparing.

In favour of rule-based pricing systems, users feel empowered because there is no black-box effect. On the other hand, the rule set can become very complex once several "IF" conditions are chained together, and as time passes the intelligence behind a multitude of rules becomes obscured by the more-than-probable overlap between them.

Yet in my view the major drawback of a rule-based system is the difficulty of translating business targets into rules. To adapt to changing company objectives (top-line or bottom-line) or context (such as new competitors), the pricing-engine user must engage in trial and error to check whether new rules better match the new conditions. To check the appropriateness of a new rule, AB testing or pre-post analysis can be used, but these are time-consuming processes that involve designing an experiment, waiting to gather enough data to reach statistical significance, repeating until the results are satisfactory, and finally deploying. Clearly, it's not agile.
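The "waiting for statistical significance" point can be made concrete with a back-of-the-envelope sample-size calculation. The baseline conversion rate and lift below are made-up illustration values, and the formula is the standard two-proportion normal approximation:

```python
import math

def visitors_per_arm(p_base, rel_lift):
    """Approximate visitors needed per arm of a price A/B test for a
    two-sided test at alpha = 0.05 with 80% power (z = 1.96 and 0.84)."""
    p_test = p_base * (1 + rel_lift)
    delta = p_test - p_base
    p_bar = (p_base + p_test) / 2
    return math.ceil(2 * (1.96 + 0.84) ** 2 * p_bar * (1 - p_bar) / delta ** 2)

# Detecting a 5% relative lift on a 3% conversion rate:
n = visitors_per_arm(0.03, 0.05)   # roughly 208,000 visitors per price variant
```

At typical e-commerce traffic levels, gathering hundreds of thousands of visitors per variant for every rule change is precisely what makes this loop so slow.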

Target-based Pricing

Here we're referring to methodologies that estimate or calculate how some KPIs relate to price, so that, given a targeted KPI, the price engine can select the pricing approach that maximises the probability of achieving the target. Goals can become complicated when constraints are introduced, such as "maximise profit but don't let volume drop by more than 5%". To understand how a KPI relates to price, one can either model the KPI directly or use a two-step process, modelling KPI/volume and volume separately:

KPI($, C) = (KPI/volume)($, C) × Volume($, C)   (eq. 2)

where $ represents the price and C all the contextual variables. Note that modelling Volume($, C) essentially determines the price-response function, which carries the same information as the price elasticity. The KPI/volume ratio is obtained relatively simply when the optimisation works at SKU level, but gets trickier for a group of SKUs. For example, how the gross sales of one SKU change with its price is straightforward, but how the average gross sales of a group of SKUs change with price mixes two effects: i) the change in gross sales of each SKU, and ii) the product mix, which may itself evolve with price. Optimising therefore reduces to a good understanding of the price-response function (or, equivalently, the price elasticity) and of how the product mix responds to a price change. Among these price engines I distinguish the following:
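The two-effect decomposition can be illustrated with made-up linear demand curves for a two-SKU group (every number below is invented for the sake of the example):

```python
# Toy illustration of eq. 2 at group level: the average gross sales per
# unit of a group of SKUs responds to price through two effects -- each
# SKU's own change AND the shift in product mix.

def volume(price, a, b):
    """Assumed linear price-response function: volume = max(a - b*price, 0)."""
    return max(a - b * price, 0.0)

def avg_gross_per_unit(prices, demand_params):
    vols = [volume(p, a, b) for p, (a, b) in zip(prices, demand_params)]
    gross = sum(p * v for p, v in zip(prices, vols))
    return gross / sum(vols)       # the product mix enters through the weights

params = [(100, 2.0), (100, 1.0)]                    # SKU 1 is more price-sensitive
before = avg_gross_per_unit([20.0, 40.0], params)    # 30.0
after = avg_gross_per_unit([30.0, 40.0], params)     # 36.0
# Raising only SKU 1's price lifts the group KPI both directly and by
# shifting the mix towards the pricier SKU 2.
```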

Historical data-based pricing
This methodology uses historical data to find patterns between historical KPI performance and historical prices. Once such patterns are identified, it's possible to deduce what would happen if a given price configuration were applied. To start using these price engines, a large amount of historical data is usually needed, because learning requires exposure to different price configurations. But the company has probably never practised dynamic pricing before, so prices were fixed for long periods of time; obtaining a data sample with enough price variance to learn from means going very far back in time.

The first question, then, is whether the data has been stored in a homogeneous way over such a long period. The second is that, to apply learnings from the past to the future, the past has to represent the future sufficiently well. This might not hold in an ever-changing environment, and even when it does, the condition becomes a constraint: keeping past learnings valid for the future implies sticking with the historical pricing approach. Yet introducing something new like a dynamic pricing engine changes the pricing conditions, which threatens that continuity, and transforming its approach to pricing may be exactly why the company ventured towards a dynamic pricing engine in the first place.

A further question is whether we can isolate the effects of price changes in historical data. A typical mistake is to use historical promotion prices to estimate a product's elasticity. Elasticity relates volume changes to price, but during a promotion many other factors are at play: the psychological effect of slashed prices, the marketing campaigns around them, and the time of year that triggered the promotion in the first place.
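A minimal sketch of the kind of estimate this approach relies on: regressing log(volume) on log(price), whose slope is the price elasticity. The data points below are synthetic; real data would need enough price variance, and promotion periods filtered out, as discussed above:

```python
import math

def elasticity(prices, volumes):
    """OLS slope of log(volume) vs log(price), i.e. price elasticity of demand."""
    xs = [math.log(p) for p in prices]
    ys = [math.log(v) for v in volumes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic data generated with a true elasticity of -2:
prices = [10.0, 12.0, 15.0, 20.0]
volumes = [p ** -2 * 1e4 for p in prices]
elasticity(prices, volumes)   # -2.0 (up to floating-point error)
```

With real promotion-contaminated data, the same regression would attribute to price an effect that partly belongs to marketing and seasonality, which is exactly the mistake described above.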

Experimentation-based pricing
One way to achieve a good understanding of how KPIs respond to price is continuous experimentation: constant price movements followed by measurements of KPI performance. Because it is continuous, it keeps track of the permanent changes in the KPI-price relationships and eliminates the assumption that the (distant) past must accurately represent the future. The tricky question is finding the balance between price exploration (experiments that improve understanding of the relationships rather than optimise) and price exploitation (price movements made purely for optimisation purposes).

Reinforcement Learning (RL) models are especially suitable for experimentation-based pricing: they are built around continuous exploration and exploitation, and the reward of an action (i.e. a price decision) can be defined so that it aligns with the company's business goals. On the other hand, RL models might take a long time to learn, which can make them discouraging to implement. Examples of pure RL models applied to pricing include "Multi-Armed Bandit for Pricing" (Francesco Trovò et al., 2015) and "Thompson Sampling for Dynamic Pricing" (Ravi Ganti et al., 2018).
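A sketch of Thompson Sampling over a small discrete price grid, in the spirit of the bandit papers cited above, though this is a generic textbook version, not either paper's exact algorithm. The price grid and conversion probabilities are simulated, and the reward is price × conversion, i.e. expected revenue:

```python
import random

random.seed(0)
prices = [9.0, 10.0, 11.0]
true_conv = [0.30, 0.20, 0.10]        # unknown to the learner
alpha = [1] * len(prices)             # Beta(1, 1) priors on each arm's
beta = [1] * len(prices)              # conversion rate

for _ in range(5000):
    # Sample a plausible conversion rate per arm from its posterior,
    # then play the arm with the best sampled revenue (explore + exploit).
    samples = [random.betavariate(alpha[k], beta[k]) for k in range(len(prices))]
    k = max(range(len(prices)), key=lambda i: prices[i] * samples[i])
    converted = random.random() < true_conv[k]
    alpha[k] += converted             # posterior update for the chosen arm
    beta[k] += not converted

best = max(range(len(prices)), key=lambda i: alpha[i] + beta[i])
# After enough rounds the most-played arm is typically the revenue-optimal
# price (here 9.0: 9 * 0.30 = 2.7 beats 10 * 0.20 = 2.0 and 11 * 0.10 = 1.1).
```

The exploration/exploitation balance discussed above is handled implicitly: arms with uncertain posteriors still get sampled occasionally, while clearly inferior arms are played less and less.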

Another problem with experimentation-based pricing models is that some simplification of the pricing strategy is needed: otherwise there are so many ways of setting prices that achieving relevant coverage of all possible setups during the exploration periods would be impossible. The spectrum of possible price setups therefore needs to be reduced, possibly limiting the full potential of pricing. This topic will be covered briefly in the next article of this series.

It's important to note that in the experimentation-based approach, dynamism can be three-fold: adaptation to an environmental change (for example, measuring higher demand than expected), the consequence of customer segmentation (identical to the granular dynamism explained above, for example differentiating weekday and weekend buyers), and increased understanding, as models become smarter through continuous experimentation.

Note that target-based pricing engines rely much less on user expertise than rule-based pricing does. A target-based pricing user still influences the selection of elasticity models, forecast models and the assumptions supporting them, but price adaptations are automated. In rule-based pricing, by contrast, the user pursuing optimisation must actively create a new rule or modify an existing one and hope the change achieves the desired effect. Target-based pricing engines therefore avoid incorrect preconceived ideas and gain in agility.

Conclusion

Building a dynamic pricing algorithm can be achieved in multiple ways, and the choice of one approach over another depends on the company's context and goals.

From a performance perspective, the methodologies are tricky to compare because: 1) a company rarely has two dynamic pricing solutions co-existing, 2) user expertise matters, and 3) the fit between the approach and the type of company or industry plays an important role.

However, based on my six years of experience, I tend to believe that experimentation-based pricing is better placed than the other approaches for creating value. Admittedly, exploration periods result in missed opportunities, and because experimentation-based pricing needs a simplified pricing strategy it can never reach the granularity of some rule-based setups. With this in mind, why do I still believe more in the experimentation-based approach? Because we live in an ever-changing world: thanks to continuous experimentation, its adaptability to new realities is superior to that of rule-based and historical-data-based pricing, and it adapts to new company goals far more efficiently than rule-based pricing. In today's world such agility is, in my opinion, capable of outweighing the other disadvantages. In the end, if you're looking for a dynamic pricing solution, you're looking for dynamism… you're looking for quickness… and you're looking for agility.

In the next article we'll discuss how price differentiation at SKU level complicates price optimisation, and the measures that can simplify it so that experimentation-based pricing can be applied.

Juan Manuel Mayén Gijón
Head of Data Science at PricingHUB, PhD


Written by PricingHUB

PricingHUB is a Dynamic Pricing SaaS platform designed to help retailers and e-commerce businesses optimise their profitability and protect their margins.