Optimisation: Striking the Right Balance

One of the guiding principles we use in commercial mathematics is to “Model Conservatively, but Optimise Aggressively”. This means that the problem domain should be modeled with sufficient “fat” in the data to ensure that the results are both legal and robust; but given this, we should then seek to apply the best (fastest and highest quality) solution approach that we can get our hands on.

Optimising aggressively can sometimes have its downfalls, though, if taken too literally. I’ve been doing a few experiments with numerical weightings of the objective function in a Vehicle Routing problem, where this issue is readily apparent. (Strictly speaking, it is a Vehicle Routing Problem with time windows, a heterogeneous fleet, travel times with peak hours, both volume and weight capacities, and various other side constraints.)

Our Vehicle Routing uses travel times (based on shortest paths through the street network) that are characterised by both distance and duration. Durations can vary because of different road speeds on different types of streets (highways vs suburban roads, for example). This raises the question of the basis on which to optimise the vehicle routes: given that the optimisation has already, to some extent, minimised the number of vehicles and created well-clustered routes, what is the most desirable outcome for the duration and distance KPIs?
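The weighting scheme discussed below can be sketched as a simple linear objective over each route’s duration and distance. The names `cost_per_hour` and `cost_per_km`, and the purely linear form, are assumptions for illustration; the real objective also accounts for vehicle counts, time windows and the other side constraints mentioned above.

```python
def route_cost(duration_hours, distance_km, cost_per_hour, cost_per_km):
    """Weighted cost of a single route: a linear mix of duration and distance."""
    return cost_per_hour * duration_hours + cost_per_km * distance_km

def plan_cost(routes, cost_per_hour, cost_per_km):
    """Total weighted cost of a plan, given (duration_hours, distance_km) per route."""
    return sum(route_cost(d, km, cost_per_hour, cost_per_km) for d, km in routes)

# Two routes: 2.0 h / 50 km and 1.5 h / 30 km, at $10/hour and $1/km.
print(plan_cost([(2.0, 50.0), (1.5, 30.0)], cost_per_hour=10.0, cost_per_km=1.0))
```

Raising `cost_per_hour` relative to `cost_per_km` tilts the search towards faster (but possibly longer) routes, and vice versa, which is exactly the knob varied in the experiment below.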

In one experiment I’ve tried three different weightings for the duration (cost per hour) while keeping the cost per distance constant. I’ve run three values for this cost per hour – low, medium, and high weightings – on real-life delivery problems across two different Australian metropolitan regions.

Region 1

Cost/hour   Total Duration   Driving Duration   Distance
Low         74:47            24:38              708
Medium      72:45            23:55              712
High        72:58            23:42              768

Region 2

Cost/hour   Total Duration   Driving Duration   Distance
Low         113:54           46:44              1465
Medium      107:51           41:36              1479
High        108:51           43:49              1518

From these results there is a (more-or-less) general correspondence between distance and the driver cost per hour, as you would expect. However, if you push one weighting too far (i.e. optimise too aggressively or naively), it can be to the detriment of all the KPIs, because the optimisation is pushing too strongly in one direction: perhaps it is outside the parameter space for which it was originally tuned, or perhaps it pushes the metaheuristic into search-space regions that are harder to escape from. This is seen most acutely in Region 2 with the high cost per hour. Conversely, if you drop the cost per hour to a low value, the (very modest) reduction in distance is paid for very badly in much longer durations. What is most likely happening here is that the routes include much more waiting time (waiting at delivery points for the time windows to “open”) in order to avoid even a short trip (which incurs distance) to a nearby delivery point that could be served instead of waiting.

The problem of striking the right balance is most acute with metaheuristics which can only really be tuned and investigated by being run many times across multiple data sets, in order to get a feel for how the solution “cost curve” looks in response to different input weightings. In our example, an in-between value for cost per hour seems to strike the best balance to produce the overall most desirable KPI outcome.
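The tuning process described above amounts to a parameter sweep: run the solver many times, across several data sets and weightings, and tabulate the resulting KPIs. The sketch below shows the shape of such a loop; `solve` is a stand-in for the actual metaheuristic (here a toy model whose behaviour loosely mimics the waiting-time and distance effects discussed above, not real solver output):

```python
def solve(dataset, cost_per_hour, cost_per_km=1.0):
    """Stand-in for the VRP metaheuristic.

    Returns (total_duration_hours, driving_hours, distance_km). A real
    implementation would run the full solver; this toy model just makes
    waiting shrink and distance grow as cost_per_hour increases.
    """
    driving_hours, base_distance = dataset
    waiting = max(0.0, 10.0 / cost_per_hour)           # toy: low weight -> more waiting
    distance = base_distance * (1 + cost_per_hour / 200.0)  # toy: high weight -> more km
    return driving_hours + waiting, driving_hours, distance

# Hypothetical data sets: (driving_hours, distance_km) per region.
datasets = {"region_1": (24.0, 700.0), "region_2": (45.0, 1450.0)}

for name, data in datasets.items():
    for cph in (5.0, 15.0, 40.0):   # low / medium / high weightings
        total, driving, dist = solve(data, cph)
        print(f"{name} cost/hour={cph:>4}: total={total:.1f}h "
              f"driving={driving:.1f}h distance={dist:.0f}km")
```

Plotting or tabulating the sweep output is what gives the “feel” for the solution cost curve; the in-between weighting then shows up as the row with the best overall balance.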
