Supply Chain Planning

Making the most of storage facilities in large construction projects

Large construction projects have a huge logistics component, often requiring equipment from all over the globe to be brought together through a complex supply chain. In an ideal world, our suppliers would produce items on time and they would be transported to our construction contractors just as they were needed.

Unfortunately, we don’t work in an ideal world – suppliers can fail to produce on time, problems can occur in transit, and external forces outside our control can interrupt the construction schedule. For this reason, we want a contingency plan – usually in the form of a storage facility where we can hold spare or excess items if our supply and demand schedules don’t match up.

These facilities are used heavily over a short period of time, meaning that any inefficiencies quickly add up to large, unnecessary costs. However, given the size of these projects, identifying and fixing these issues is far from easy or intuitive. It is through analytics that you can drive quantitative answers and support effective decisions in your logistics and facility planning.

Read more

Biarri FIFO Management

Grounding the complexity of Fly In Fly Out management

Being able to close the labour and skill gap is a critical factor in sustaining growth and maximising profitability for remote operations. It is imperative that companies have the tools and skills available to unravel the complexity of FIFO management.

FIFO workforces are commonly used by large infrastructure and resource projects in remote regions, including rural and offshore areas. These regions often don’t have adequate infrastructure or an available local workforce with the right skillset, which means companies need to bring in workers from interstate and sometimes overseas.

The FIFO problem is complex for many companies. It involves determining efficient ways to move people via aircraft, taking into consideration: multiple projects at various phases over multiple locations, with a dynamic workforce utilising different skillsets on a variety of roster patterns, as well as using a fleet consisting of different types and numbers of aircraft.

Often the goal of FIFO management is to determine the number, and type, of aircraft needed in order to minimise cost while working with the opposing objectives of ensuring that staff arrive before the start of their shift (but not too early), depart after the end of their shift (but not too late), and that travel durations are kept to acceptable lengths (to keep fatigue low).
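To make the shape of this trade-off concrete, here is a minimal fleet-sizing sketch using the open-source PuLP library. All of the aircraft data, costs and the single-route demand are invented for illustration, and this is not Biarri's actual formulation; a real FIFO model also has to handle shift windows, rosters, multiple legs and fatigue rules.

```python
# Toy fleet-sizing model: cover one route's passenger demand at minimum flying cost.
# Aircraft names, seat counts, costs and the demand figure are all hypothetical.
import pulp

aircraft = {
    "DASH_8_200": {"seats": 36, "cost_per_flight": 9000, "max_flights": 4},
    "B737_800": {"seats": 168, "cost_per_flight": 28000, "max_flights": 2},
}
passenger_demand = 220  # people who must arrive before their shift starts

model = pulp.LpProblem("fifo_fleet_sizing", pulp.LpMinimize)
flights = {
    name: pulp.LpVariable(f"flights_{name}", lowBound=0,
                          upBound=data["max_flights"], cat="Integer")
    for name, data in aircraft.items()
}

# Objective: minimise the total flying cost.
model += pulp.lpSum(data["cost_per_flight"] * flights[name]
                    for name, data in aircraft.items())

# Constraint: provide at least enough seats to move everybody.
model += pulp.lpSum(data["seats"] * flights[name]
                    for name, data in aircraft.items()) >= passenger_demand

model.solve(pulp.PULP_CBC_CMD(msg=False))
for name in aircraft:
    print(name, "flights:", int(flights[name].value()))
```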

Balancing FIFO Complexity

Analytics to break through the complexity

With this level of complexity, a traditional Excel approach lacks the rigour and power to find the most efficient and effective results. As a result, we have developed a number of different FIFO optimisers at Biarri to help ensure the best outcome for our clients.

The reality is that there are often many more factors that need to be considered which complicates the problem further. Each FIFO optimisation problem often turns out to be quite different once the detail of the problem is better understood.

High Level FIFO Requirements

Some companies just want us to help them “define their fleet, or travel requirements” so they can then go out to tender (it also helps to keep the vendors honest); others actually want an operational tool. Others may be looking to see if there is a business case for upgrading an airport (e.g. if the airport is upgraded, then larger aircraft can be used, which can reduce the need for bus in bus out (BIBO), alter the risk profile due to road kilometres travelled, and dramatically change travel durations).

Specific FIFO requirements

Our clients often want different levels of detail in the solution. Some are happy with a solution that ensures adequate movements at the week level (e.g. 15 flights of aircraft type A between locations B and C per week); others want very detailed minute-by-minute schedules that take into account turnaround times, time between takeoff and landing, and the number of aircraft gates, with solutions showing exactly who is travelling on which flight and aircraft, and when.

Across Multiple Projects

Our clients also often have multiple projects on the go at the same time, sometimes with different priorities given to different projects. These priorities can be used to ensure that if not all of the people-movement demands can be met, the lower-priority movements are the ones less likely to be satisfied.

Optimising the time horizon

The optimisation time horizon can also vary significantly: some clients optimise over a 24-hour period (or even less, if they want to re-optimise in the middle of the day because of unpredictable events such as weather delays), while others want higher-level schedules spanning several years to help them make strategic decisions and determine how their fleet needs to change over time.

Understanding the constraints

Constraints such as the maximum distance an aircraft can travel before needing to refuel, maintenance schedules, and the refuelling locations themselves often also need to be considered. We’ve dealt with both fixed wing and rotary wing (helicopter) aircraft. Helicopters have the additional complication of sometimes having to carry more fuel (and thus weight) to travel further, which reduces the number of passengers they can carry because of their limited total payload capacity.
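As a rough illustration of that trade-off, with entirely made-up figures for burn rate, reserve fuel, payload limit and passenger weight, the arithmetic looks something like this:

```python
# Illustrative numbers only: extra fuel for a longer helicopter leg eats into the
# passenger payload once the total payload limit is fixed.
def max_passengers(distance_km, burn_kg_per_km=3.0, reserve_kg=150,
                   max_payload_kg=2200, passenger_kg=100):
    """Fuel for the leg plus a reserve comes out of the fixed payload budget;
    whatever remains carries passengers."""
    fuel_kg = distance_km * burn_kg_per_km + reserve_kg
    remaining_kg = max_payload_kg - fuel_kg
    return max(0, int(remaining_kg // passenger_kg))

for distance in (150, 300, 450):
    print(distance, "km leg ->", max_passengers(distance), "passengers")
```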

Finding the right FIFO parameters

We have outlined below some of the parameters that our FIFO optimisers have considered. The list is by no means comprehensive, and we can always include new parameters if a different problem requires them, but it gives a good insight into the different variables that can, and should, be considered.

Some of the typical inputs include (a rough code sketch of this data model follows the lists below):

Airports:

  • Location
  • Hours of operation
  • Refuelling capability
  • Refuelling duration
  • Availability (i.e. you can specify a start and end date for which the airport is available)

Aircraft:

  • Serial number
  • Category (e.g. fixed wing or rotary wing)
  • Type (e.g. DASH 8-200)
  • Average speed
  • Passenger seats
  • Maximum payload
  • Fuel density
  • Fuel tank capacity
  • Re-fuelling time
  • Fuel burn rate
  • Base location
  • Availability (i.e. you can specify a start and end date for which the aircraft is available)
  • Costs

Flight legs:

  • From location
  • To Location
  • Distance
  • Aircraft types able to fly this leg

Passenger travel demands:

  • Origin
  • Destination
  • Project
  • Number of passengers
  • From Date
  • To Date
  • Arrive Before (i.e. must arrive on their first working day of the roster by this time)
  • Depart After (i.e. must depart after this time on the last working day of the roster)
  • Roster Pattern (e.g. 14:14 = 14 days on, 14 days off)
  • Day of week (i.e. which day of the week can this person travel)
  • Group (demands can be grouped together to allow the user to specify which demands can be grouped on the same aircraft)
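As a rough sketch of how inputs like these can be carried in code, the two records below simply mirror the aircraft and travel-demand lists above. The field names are illustrative only, not a prescribed schema:

```python
# A rough sketch of the data model implied by the input lists above.
# Field names mirror the lists; they are not a prescribed schema.
from dataclasses import dataclass
from datetime import date, time

@dataclass
class Aircraft:
    serial_number: str
    category: str            # e.g. "fixed wing" or "rotary wing"
    aircraft_type: str       # e.g. "DASH 8-200"
    average_speed_kmh: float
    passenger_seats: int
    max_payload_kg: float
    base_location: str
    available_from: date
    available_to: date

@dataclass
class TravelDemand:
    origin: str
    destination: str
    project: str
    passengers: int
    from_date: date
    to_date: date
    arrive_before: time      # must land by this time on the first working day
    depart_after: time       # must not leave before this time on the last working day
    roster_pattern: str      # e.g. "14:14" = 14 days on, 14 days off
```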

Some of the typical outputs include:

Overall summary:

  • Total flights
  • Total distance flown
  • Total fuel burned
  • Total number of aircraft required
  • Utilisation Percentage
  • Total unused pax capacity
  • Total passenger demand
  • Total passenger demand satisfied

Per aircraft, per day:

  • Serial number
  • Date
  • Total pax
  • Total hours flown
  • Total distance flown
  • Total fuel burned
  • Total flights
  • Total legs
  • Cost

Per flight leg:

  • Flight ID
  • Resource ID
  • Pax capacity
  • Available pax capacity (this is < pax capacity if the fuel weight is a limiting factor)
  • Total used pax
  • Utilisation Percentage
  • Departure location
  • Departure date and time
  • Arrival location
  • Arrival date and time
  • Day of week
  • Total distance
  • Total hours flown
  • Total fuel burned
  • Fuel weight at start of leg
  • Refuel at destination (true or false)
  • Turn around time
  • Cost

Passenger movements per flight:

  • Flight ID
  • Origin
  • Departure date and time
  • Destination
  • Arrival date and time
  • Project
  • Pax

Per project:

  • Project name
  • Total demand
  • Total satisfied demand
  • Total unsatisfied demand (e.g. this will be non zero if there is not enough capacity to transport demand)
  • Total impossible to satisfy demand (e.g. this will be non zero if a flight path has not been specified in the inputs that results in some demand being impossible to satisfy regardless of aircraft resources available)

Per recurring flight route:

  • Flight ID
  • Number of instances (i.e. how many times is this flight route flown at the same time – but on different dates)
  • Resource
  • Date of first flight
  • Date of last flight
  • Day of week
  • Departure time
  • Arrival time
  • Total people
  • Total distance
  • Total hours flown
  • Total fuel burned

Unravel the complexity of FIFO Management

The work we have done for companies such as Arrow, Origin, QGC, BMA, IBS, and Santos has shown us that, despite all facing FIFO problems, each required a different approach in order to achieve the right result.

This has demonstrated to us that when approaching a FIFO problem, where the variables to be considered differ so much from client to client, a standard approach (a commercial off-the-shelf product) or Excel models will generally struggle with the complexity.

Having a tool built around each client’s specific variables demonstrates the benefits of bespoke solutions for FIFO problems.

Find out more about Biarri in Mining >>
Find out more about Biarri in Oil & Gas >>
Find out more about Biarri and FIFO Scheduling >>

Or, Get in contact so we can discuss your requirements.

Driving efficiency in the Oil and Gas industry

The oil and gas industry has been under severe pressure since late 2014, when oil prices dropped significantly. The highly volatile international market and an oversupply of oil have meant companies have had to reduce costs in one way, shape or form.

Bill Kroger, co-chair of the law firm Baker Botts, told Rigzone in an interview: “Energy companies may need to lower their prices in response to a drop in demand…. For this reason, we may see CAPEX [capital expenditures] begin to decline until there is some stability with oil prices.”

This has been evident in Australia where many oil and gas companies have reduced capital spending significantly. However, with a lot of oil and gas projects shifting towards the operational phase, how can we make processes and decisions more efficient and effective?

Read more

Death by parameters

In my previous blog post I wrote about the great flexibility and power of Genetic Algorithms. So you may have thought: why do I need help with my optimisation problems? Can’t I just grab an off-the-shelf Genetic Algorithm and use it? However, as with everything, there are two sides to every story, and this time I’ll show you why optimising with Genetic Algorithms is much harder than it seems.

The flexibility of Genetic Algorithms arises, in part, from a flexibility to choose a dizzying number of parameters. When writing your own code you potentially have to decide on things such as the number of competing optimal solutions, the number of times to improve them, the mutation and crossover probabilities, the percentage of the population to eliminate in each generation and many more.

With so many choices, choosing the parameters correctly can determine whether the algorithm bears fruit or withers and dies. This difficulty has led to many papers on the best way to choose parameters. Unfortunately, even if one is able to choose good parameters for one problem, there is no guarantee that the same parameters will work for the next problem.

So over the years researchers have searched for other powerful optimisation techniques that don’t suffer from such a parameter overload. From this research we now have a number of promising algorithms. In particular, in 2009 Xin-She Yang and Suash Deb came up with the ultimate of all parameter-starved algorithms, the Cuckoo Search Algorithm. In this algorithm there is one parameter. Yes, only one.

A reed warbler feeding a young Cuckoo (from Wikipedia)

The Cuckoo Search Algorithm is inspired by the parasitic nature of some cuckoo species such as the European common cuckoo. These species lay their eggs in the nests of other host birds in an attempt to trick the host to raise their own nestlings. Sometimes this devious trick succeeds. When it doesn’t, the host bird either throws the egg over the side of the nest or simply abandons the nest altogether.

In the Cuckoo Search Algorithm the cuckoo’s ploy translates into an optimisation algorithm via four idealised rules, which are repeated until the desired optimisation criteria are fulfilled. In the following algorithm each egg represents a solution, and by a cuckoo laying an egg we mean creating a new random solution:

  1. Each cuckoo lays an egg in a random nest.
  2. Out of all laid eggs keep a number of the best eggs equal to the number of cuckoos.
  3. Abandon a fixed fraction of the worst eggs.
  4. Repeat

Found the parameter? The single, lonely parameter in the Cuckoo Search Algorithm is the fraction of the worst nests that are abandoned. This parameter affects how thoroughly the algorithm searches the space of possible solutions, so a lower value means the algorithm will find a local optimum faster (although perhaps not the desired global optimum).
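A bare-bones implementation of those four rules, with the abandonment fraction pa as the single tuning parameter, might look like the sketch below. The objective function is a toy, and real implementations usually generate new eggs using Lévy flights rather than plain uniform draws:

```python
# A bare-bones cuckoo search following the four rules above. The fraction of
# worst nests abandoned, pa, is the single tuning parameter. Everything here
# is a toy illustration, not a production implementation.
import random

def objective(x):
    return (x - 3.2) ** 2 + 1.0   # toy function with its minimum at x = 3.2

def cuckoo_search(n_nests=25, pa=0.25, generations=200, lo=-10.0, hi=10.0):
    nests = [random.uniform(lo, hi) for _ in range(n_nests)]
    for _ in range(generations):
        # Rule 1: each cuckoo lays an egg (a new random solution) in a random nest.
        for _ in range(n_nests):
            egg = random.uniform(lo, hi)
            i = random.randrange(n_nests)
            if objective(egg) < objective(nests[i]):   # Rule 2: keep the better egg
                nests[i] = egg
        # Rule 3: abandon a fraction pa of the worst nests and rebuild them randomly.
        nests.sort(key=objective)
        n_abandon = int(pa * n_nests)
        for i in range(n_nests - n_abandon, n_nests):
            nests[i] = random.uniform(lo, hi)
    return min(nests, key=objective)   # Rule 4 is the loop above: repeat

print(cuckoo_search())
```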

The avian-inspired algorithm has been used in numerous difficult problems to oust other optimisation methods from their leadership positions. For example, it has been used for spring and beam design problems, scheduling problems, the famous travelling salesman problem and even optimisation challenges in nanoelectronics! Like most other heuristic optimisation methods, the areas of application can be quite astounding.

So now that you’ve cancelled the download of a promising Genetic Algorithm and started one of a new Cuckoo Search Algorithm, I thought I’d warn you again that there’s another side to this story too. Although the bird-based algorithm makes parameter choice simple, it may or may not be your best choice for a given optimisation problem. There are many heuristics for optimisation problems, and choosing the right heuristic is probably much harder than choosing the right parameters for a given optimisation method. But you don’t have to worry about your precious nest eggs, because luckily you’re on the website of a company competent enough to help you with this choice.

One tech to rule them all

Back in high school when you were battling polynomials with derivatives, you learnt one of your first all-purpose optimisation techniques. Given a function, take its derivative, set it to zero and solve for all optima. Voila! After you had coerced the equation to give up all its maxima secrets, you could then be safe in the knowledge that you had found the optimal values.

If you enjoyed computing the optimal speed to jump your calculus hurdles, then you probably went on to university to learn more about other mathematical tools. Once there, you most likely met your second all-purpose optimisation technique, Linear Programming. Given a linear objective function with linear constraints, you learnt that you could use any of a number of tools, such as the Simplex Method, to easily and quickly solve such problems.
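For example, a tiny linear program of the kind the Simplex Method (or its modern descendants) dispatches in milliseconds might look like the following, handed here to SciPy's linprog with made-up profit and resource figures:

```python
# A small linear program: maximise profit 3x + 5y subject to resource limits.
# The numbers are invented purely for illustration.
from scipy.optimize import linprog

# linprog minimises, so negate the profit coefficients to maximise.
c = [-3, -5]
A_ub = [[1, 2],    # machine hours:  x + 2y <= 14
        [3, 1]]    # labour hours:  3x +  y <= 15
b_ub = [14, 15]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal plan:", result.x, "profit:", -result.fun)
```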

Figure 1: Ant Colony Optimisation in Action

From there, if you had more of an applied orientation, you probably went on to learn about the more exotic optimisation creatures: for example, Genetic Algorithms and Simulated Annealing, as well as a host of other nature-inspired algorithms such as Ant Colony Optimisation, Particle Swarm Optimisation, Artificial Bee Colony Optimisation and so on. In the 1990s and 2000s, for almost any type of problem, it seemed that one could find a new nature-inspired optimisation technique to solve it better than previous, more mainstream tools.

However, in spite of the swarm of new optimisation tools, most problems for which these new techniques were developed could be solved with existing tools, although probably not as efficiently. Hence the question was: which method is best? The answer came in 1995 when two researchers at the renowned Santa Fe Institute, David Wolpert and William Macready, proved a number of theorems collectively referred to as the “No Free Lunch” theorems. These results can be seen as implying that there is no one optimisation method that is best for all problems; in addition, when we average over all problems, we expect all methods to perform equally.
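Stated a little more formally (paraphrasing Wolpert and Macready's first theorem), for any two algorithms a₁ and a₂, when we sum over every possible objective function f, the probability of having observed any particular sequence of cost values after m distinct evaluations is identical:

```latex
% d_m^y denotes the sequence of cost values seen after m distinct evaluations.
\sum_{f} P\left(d_m^{y} \mid f, m, a_1\right) \;=\; \sum_{f} P\left(d_m^{y} \mid f, m, a_2\right)
```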

This result has important and deep implications. It means that if you swear by a single generic optimisation method and try to use it to solve all your problems, then you shouldn’t expect it to perform better than your three-year-old son guessing random solutions to all your different stochastic, multi-objective, constrained optimisation problems.

Given this, it may seem strange that I am about to suggest solving numerous problems with a single optimisation technique, especially since a number of these problems don’t look like optimisation problems at all! The reason for doing so is to see the power of an interesting optimisation technique, as well as its flexibility and generality. In addition, one needs to bear in mind that the “No Free Lunch” theorems are a result about every imaginable optimisation problem, which is just a tad more than the few I will touch on here.

Figure 2: The ST5 antenna’s final optimised design

The class of optimisation technique that I want to discuss here is generally referred to as Genetic Algorithms. They have been successfully used on thousands of research and industrial problems and continue to amaze researchers with their potential to solve problems far beyond their original scope. For example, one of the most famous applications of Genetic Algorithms was by NASA in 2006 to develop a new antenna design for its ST5 mission. As Figure 2 shows, the optimised design was anything but intuitive and most likely would not have been found by a “standard” optimisation technique seeded with initial human guesses.

So what type of problems would be considered inappropriate for Genetic Algorithms? First of all, you couldn’t do much worse than write some code to give to your son or daughter so that in their next algebra exam they can solve x + 5 = 6 with your tailor-made Genetic Algorithm. They could just let your code chug away and sit there patiently while it aces the exam for them. Although probably not the most effective use of Genetic Algorithms, it is entirely possible.

So let’s take that thought one step further. What about solving the humble quadratic equation with a genetic algorithm? It has been done (and done again, and again). But the quadratic equation belongs to pure mathematics, right? And it’s an equation you can solve directly, isn’t it? Yes and yes, but interestingly enough Genetic Algorithms have started to make their way into some of the purest areas of mathematics to help solve problems that are stumping pure mathematicians. This is truly one area where you would not expect the tools of applied mathematicians to come to the rescue of pure mathematicians.
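To see how overpowered (and how parameter-laden) this is, here is a toy genetic algorithm hunting a root of x² − 5x + 6 = 0, whose roots are 2 and 3. Every parameter – population size, elite count, mutation rate – is chosen arbitrarily, which is exactly the “death by parameters” problem described in the previous post:

```python
# A toy genetic algorithm searching for a root of x^2 - 5x + 6 = 0.
# All parameters are arbitrary choices made purely for illustration.
import random

def fitness(x):
    return abs(x * x - 5 * x + 6)   # zero at a root; smaller is better

def solve(pop_size=50, generations=200, mutation_rate=0.3, elite=10):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[:elite]                       # selection
        children = []
        while len(children) < pop_size - elite:
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                            # crossover (blend)
            if random.random() < mutation_rate:
                child += random.gauss(0, 0.5)              # mutation
            children.append(child)
        population = parents + children
    return min(population, key=fitness)

print(round(solve(), 3))   # converges to roughly 2 or 3
```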

We have only scratched the surface of unexpected applications of Genetic Algorithms. In fact, they have made an appearance in almost every endeavour of research from physics to economics, from architecture to chemistry and even all the way back to their nature-inspired beginnings with numerous applications in biology. So in spite of our knowledge that there is no one method to solve all problems, Genetic Algorithms present us with a versatile and powerful tool that seems to have a lot more in store for our future problem solvers.

Read the second part here.

3 Elements of a successful fleet management system

MiningIQ recently published an article looking at how to make mining operations more efficient. As the market becomes more competitive and compliance requirements and external forces (exchange rates, supply, demand, labour costs, etc.) increase, these businesses are required to maximise productivity at minimum operating cost in order to increase profitability.

They found that one of the key business processes that large mining operations can deliver on is their fleet management. MiningIQ discovered that companies such as Glencore, BHP and ADG Mining are all rethinking their approach to fleet management and looking at smarter ways to improve their efficiency and bottom line.

Effective Maintenance Management

MiningIQ stated that Glencore is at the forefront of implementing optimisation in their maintenance management system. Their fleet is made up of millions of dollars’ worth of equipment, and their loader rebuilds cost upwards of $1 million.

Through the use of maintenance management optimisation they were able to save, in one instance, $390,000.

Truck Haulage Continuous Improvement

They went on to say that Peter Knights, Chair of BHP Billiton Mitsubishi Alliance, has even gone beyond using optimisation within the maintenance of the vehicle fleet, by looking at the most economical way to run the vehicles; through energy and fuel savings they were able to improve efficiency.

Streamlining The Communications system

MiningIQ’s report found that power, communications, GPS and general infrastructure were key barriers to business efficiencies.

Adam Gray, specialist consultant in mining systems at ADG Mining, said:

“The improvements in truck utilisation were astounding – we’re talking in the realms of more than three per cent, which is a massive figure at the end of an NPV.”

In a lot of cases, streamlining the communication systems is as simple as moving off basic Excel-based programs that don’t allow you to perform specific tasks.

How can I achieve these sorts of savings with my fleet?

Biarri is an Australian owned and operated company established in 2008. We have worked in many industries but have proven capabilities within Rail, Mining, and Oil and Gas. We have worked with companies such as Rio Tinto, QLD Cotton, Arrow Energy, Origin Energy and Boral (to name a few), across a range of problems including not only fleet management but also FIFO scheduling, manpower planning, vehicle routing, rolling stock operational management, vessel load optimisation, and many more.

Biarri builds custom software for you, with you, delivered over the cloud. Using the power of mathematics, we can provide you with accessible optimisation that anyone in your business can use.

If you want to find out more about how we can help you, Contact us!

Coal Train Crew Scheduling

An optimised approach to Coal Train Crew Scheduling

Rail is frequently used for moving coal between mines and ports, and interactions between train and crewing requirements can create highly complex problems.

Recently Matt Herbert, an optimisation consultant at Biarri Commercial Mathematics, was invited to present an approach to Coal Train Crew Scheduling at the Queensland University of Technology, hosted by ASOR.

Matt provided insights into the problems many mining and rail companies face when scheduling their crews. His formulation considered many aspects of the real world problem, including restricting the number of crew changes on each service, and variable start times for crews.

This approach is able to produce weekly crew assignments with high utilisation in run times of around an hour, down from existing manual methods requiring a day or more.

Have a look at Matt’s Presentation

Optimising With Big Data

Can optimisation save the world? Well, it depends on the problem.

Optimisation may not be able to solve all of humanity’s problems, but it has recently made important strides towards helping avert the impending global warming disaster. As I’m sure you’re aware, the global warming crisis is rated by scientists worldwide as one of the most important issues of the twenty-first century. For decades now scientists have been warning us that we need to wean ourselves off our dependency on fossil fuels. Although replacements for fossil fuel-based energy have existed for over half a century, none of them seem to have made a significant impact on the global warming issue yet. For example, wind power can currently produce electricity at a price close to that of fossil fuel-based electricity. However, in spite of the impending crisis, wind power has yet to enter our energy mix in a big way. One of the reasons for this is not simply a lack of political will but a lack of good optimisation tools. In the last five years, though, it seems that the winds of change are blowing, because some of wind power’s biggest opponents have gone silent. So what has happened?

To be able to appreciate what has happened and why optimisation is so important, we need to understand what the problem is, and to be honest, the problem is wind. Unfortunately, like visits from your in-laws, wind does not come when we want it to, nor does it behave like we want it to (imagine generating wind power during a cyclone!). So, due to its unpredictability, naively using wind power requires companies to have back-up power plants on standby just in case the wind suddenly stops. Back-up power plants are much less efficient than a permanently running power plant, and counter-intuitively a situation like this can lead to more CO2 pollution than if we had just used a coal-fired power plant!

What has happened in the recent past, and what has turned wind energy from a much debated dream into a reality, is an optimal use of wind turbines to generate energy. Thanks to such optimal use of wind turbines, in Colorado, USA, the energy company Xcel Energy (which once mailed its customers urging them to oppose a meagre 10% renewable energy contribution) is now planning on profitably sourcing up to 30% of its energy from wind by 2030 (Reference). This is more than the famous 20% by 2020 proposed by Germany (which is now coming under fire due to high energy costs), and it owes its success to better weather prediction and an optimal use of the wind turbines.

The case of optimising wind turbine electricity production is obviously a complicated affair, given that wind can change its direction and strength continuously throughout the day. So, in its simplest form, the challenge for those operating wind turbines is: given predictions about wind strength and direction in the next moment, how should the turbine’s parameters be adjusted to maximise electricity output? These parameters typically translate into physical properties of the wind turbine such as the direction the turbine is facing, the angle of the turbine blades, the speed and torque of the turbine drivetrain, and so on. Given this significant challenge, even if we could perfectly predict the wind conditions in the next moment, we would probably have no way of knowing the perfect parameter values for each given set of wind conditions. To find a solution to this problem, scientists have turned to machine learning, a set of tools from the world of big data, to predict optimal parameters (and to help predict the weather too!).

Machine learning tools are typically used when there is a large amount of data available to teach a system to behave in a certain optimal way. In our case, it is to teach the wind turbines to adjust their parameters optimally to maximise their electricity output given the predicted wind conditions. Typical examples of machine learning techniques include Bayesian Networks (used everywhere from biology to economics), Support Vector Machines (often used for recognising handwriting) and, probably most well known, Artificial Neural Networks (which were used famously in 2012 by Google to pursue the noble cause of identifying pictures of cats in videos).

In the current case of optimising parameter selection for optimal electricity output of wind turbines, one promising tool for finding the best choice of parameters is Artificial Neural Networks. For those working in the industry, this choice may seem natural; however, implementing a good neural network is fraught with difficulties. One needs to choose a correct model for the function to be optimised; given the available data, one needs to choose a good learning algorithm to train the neural network correctly; and finally the results need to be robust to new and unseen inputs. It is precisely in these fields that we have seen significant advances (and also stellar buyouts, such as DeepMind for $450 million in 2014), which have led to more efficient algorithms for parameter selection. In turn this has led to the recent adoption of more and more wind turbines for renewable energy generation, because each turbine now produces more electricity. In 2013 in Australia alone, plans were underway to install another eleven wind farms adding 1627 MW to the grid (Reference).
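As a purely illustrative sketch of the idea – synthetic data, invented physics, and scikit-learn's off-the-shelf neural network rather than anything a turbine manufacturer actually uses – one can train a model of power output from wind speed and blade pitch, then search the pitch for the best predicted output at a forecast wind speed:

```python
# Toy version of the idea: learn a model of power output from wind conditions
# and turbine settings, then search the settings for the best predicted output.
# The data is synthetic and the "physics" is invented purely for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
wind_speed = rng.uniform(3, 20, 5000)          # m/s
blade_pitch = rng.uniform(0, 25, 5000)         # degrees
# Pretend power peaks at a wind-dependent pitch angle.
best_pitch = 2 + 0.5 * wind_speed
power = wind_speed ** 3 * np.exp(-((blade_pitch - best_pitch) / 5) ** 2)

X = np.column_stack([wind_speed, blade_pitch])
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, power)

# For a forecast wind speed, grid-search the pitch that maximises predicted power.
forecast = 12.0
candidates = np.column_stack([np.full(100, forecast), np.linspace(0, 25, 100)])
best = candidates[np.argmax(model.predict(candidates))]
print("suggested blade pitch:", round(best[1], 1), "degrees")
```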

However, when talking about advances in wind turbine technology, what constitutes a significant advance? Let’s take the example of a country that is not even a major player on the world scene but is exemplary when it comes to wind energy generation: Spain. According to the Guardian, wind energy was Spain’s top source of electricity in 2013, generating approximately 54,000 GWh from wind turbines. So assuming we are able to improve our optimisation methods so that electricity production increases by a mere 0.1%, then, in Spain alone, we would generate an extra 54 GWh of electricity. Given that 1 MWh supplies up to roughly 200 households, this would mean that simply by increasing output by 0.1% in Spain alone, we could supply two cities with Sydney’s population with the extra electricity!

Facts like this are what cause insignificant-sounding headlines by Siemens and GE to be such game changers. Siemens recently claimed to have had a breakthrough in their neural network algorithms that enables their turbines to produce one per cent more electricity annually under moderate wind conditions. Given our previous calculation we can see how important this is. Even more impressive, given what a one per cent difference can do, was GE’s headline that they are able to increase their wind turbine output by five per cent (reference)! They haven’t stated specifically how they did it, but their description of “big data” and “predictive algorithms” would lead one to guess at similar machine learning algorithms as in the Siemens success story.

As we can see, all these small gains add up, and as the reader can guess, without the optimisation tools available today these feats would not have been possible. We can only hope that these improvements point to an industry that is poised to revolutionise our energy sector and, with the help of optimisation, maybe save the world.

Dynamic scheduling in the face of uncertainty

Many businesses operate in uncertain environments, at the mercy of breakdowns, human error, traffic or the weather. The traditional approach is to build slack into a schedule, to survive some expected level of uncertainty, and then to hope this schedule can be executed in spite of whatever unforeseen events actually occur. In instances where the schedule cannot be implemented as expected, operational decisions are made to try to meet the business goals as much as possible. Under complexity and time pressure, decision makers must often resort to making short term, locally focused decisions without the time or tools to consider implications downstream.

In this blog post, optimisation consultant Dave Scerri describes how recent algorithmic advances and smart tools have enabled the best of both worlds: an operational scheduling approach that responds to changing circumstances while focusing on system wide efficiency.

Dealing with operational uncertainty is one of the biggest challenges facing many businesses today. The most well thought out plans and schedules rarely survive contact with the real world, and the consequences are often a loss of productivity and overall efficiency. Businesses should be endeavouring to make the best decision at any point in time given the current circumstances and the best information available to them. Operational scheduling, when done right, can allow businesses to make quality, timely decisions, maximising their productivity in the face of uncertainty.

The best laid plans of mice and men often go awry

– Robert Burns

Approximate Dynamic Programming

The Challenge

Biarri’s customers are regularly faced with the problem of making decisions “here and now” which will have consequences for the future. To make life harder, there is often uncertainty about what will happen in the future. Historically, there have been two main approaches to this sort of problem:

  • Optimise for “here and now”, and let the future take care of itself – the myopic policy.
  • Apply some simple rule of thumb – a very common policy.

Many of these problems are resource allocation problems. What we would like to know is this: if we optimise for “here and now”, what value should we give to leftover resources? In other words, how can we take care of the future when optimising?

If we can assign reasonable estimated values to leftover resources, then we can make better decisions. Approximate dynamic programming is a way of doing this. The values of leftover resources are known as “value function approximations”.

A couple of examples will help. These are taken from “Approximate Dynamic Programming: Solving the Curses of Dimensionality” by Warren B. Powell.

Blood Bank

A blood bank needs to decide each day how to allocate its stocks of blood to operations. Operations are either critical (life saving) or elective. There are rules for substitution of blood types. The aim is to maximise the “value” of the operations performed, where critical operations are much more valuable. Each day new (random) blood supplies arrive, and each day there is a new (random) list of demands for blood for operations.

A myopic policy would just assign the blood to operations in the best way possible for the day. This might mean that some elective operations are done today using a rare blood type, leading to some critical operations not being done the next day. Approximate dynamic programming places a value on blood of each type – at different levels of storage – so that enough blood is kept back to meet critical demand the next day.
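A toy contrast between the two policies, with invented operation values and an invented estimate of what a leftover unit of stock will be worth tomorrow, shows the mechanism:

```python
# Toy contrast between a myopic allocation and one that prices leftover stock,
# loosely in the spirit of Powell's blood bank example. All values are invented.
def allocate(stock, demands, leftover_value_per_unit):
    """Greedily serve demands, skipping any whose value doesn't beat the
    estimated future value of keeping the unit in stock."""
    served = []
    for name, value in sorted(demands, key=lambda d: -d[1]):
        if stock == 0:
            break
        if value > leftover_value_per_unit:
            served.append(name)
            stock -= 1
    return served, stock

demands = [("critical surgery", 100), ("elective A", 20), ("elective B", 15)]

# Myopic: leftover stock is worth nothing, so every unit gets used today.
print(allocate(3, demands, leftover_value_per_unit=0))
# With a value function approximation: units are held back unless a demand
# beats the expected value of having the blood type on hand tomorrow.
print(allocate(3, demands, leftover_value_per_unit=40))
```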

Long Distance Trucking

A long distance trucking company needs to assign drivers, with their prime movers, to requests to haul trailers. Random requests arrive every day and the assignments are made every day. A driver needs to return to their home base for a couple of days off at regular intervals. A myopic policy would just assign the “cheapest” driver to each job – maybe the driver who can get to the start of the job quickest. Even a myopic policy would consider whether the driver would be able to get home in time after the job.

Approximate dynamic programming would place a value on a driver from city X being in city Y with Z days left until they need to return to their home base. This would be some measure of the chance they would be able to work their way home, rather than just deadheading. When making the assignment of drivers to jobs, we can then consider not just the cost of assigning the drivers to the jobs, but also an estimate of the value of each driver once they have finished the job.

So how does it work?

The long answer is in the book by Powell, but briefly: approximate dynamic programming uses a combination of simulation and optimisation to iteratively calculate the value function approximations – the value of leftover resources. Once this is done, operational decisions are taken using an optimisation procedure that combines the value function approximations and the short-term costs.

The actual operational outcomes can even be used to update the value function approximations. This means that if the underlying statistical model is different from what was used in the simulation process, the value function approximations will gradually get better.

Of course there is a lot of “devil in the detail”. Typically the number of possible combinations of leftover resources is huge, so we need tricks to estimate a value for any particular combination that may occur. Again, see Powell for more details.
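Stripped right down – one resource type, a single number standing in for the value function approximation, and invented demand values – the simulate, decide, observe and smooth loop looks something like this:

```python
# A stripped-down sketch of the iteration: simulate a day, decide using the
# current estimate of leftover value, observe what leftovers were actually
# worth, and smooth that observation into the estimate. Invented numbers only.
import random

value_of_leftover = 0.0     # value function approximation (a single number here)
alpha = 0.05                # smoothing step size

for iteration in range(1000):
    stock = 5
    today = [random.choice([100, 20, 15]) for _ in range(random.randint(0, 4))]
    # Decision: serve a demand today only if it beats the estimated leftover value.
    for demand in sorted(today, reverse=True):
        if stock and demand > value_of_leftover:
            stock -= 1
    # Simulate tomorrow to observe what a leftover unit was actually worth.
    tomorrow = sorted((random.choice([100, 20, 15]) for _ in range(3)), reverse=True)
    observed = tomorrow[0] if stock else 0
    # Smooth the observation into the approximation.
    value_of_leftover = (1 - alpha) * value_of_leftover + alpha * observed

print(round(value_of_leftover, 1))
```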

What data do I need?

  • The same type of cost and rule data you need to run your operations anyway.
  • Enough historic information on demand/supply so that a reasonable simulation model can be built.

Approximate Dynamic Programming is still in its infancy as a tool, but it promises a way to close the gap between moderate-sized, deterministic applications that can be solved to optimality and larger, random applications that have previously only been addressed by simulation.