

COTS vs Bespoke

[Image: COTS (commercial off the shelf) puts the bars around accessible mathematics: leads to crying babies]

Our philosophy is to make the power of mathematics accessible. Why? Because we think it isn’t currently very accessible. That limits the number of people who can use it to get value and reduces the value derived by those who do, and that’s a crying shame in a world that desperately needs efficiency.

We have all seen it multiple times in multiple organisations: the hard-to-use (probably ugly), not-really-fit-for-purpose (lots of workarounds), complicated-IT (n-tier, client/server, VM, Citrix, Oracle) approach to providing optimisation software.

So how did it come into being? Here’s how I see it:

“I’m unique; give me your shrink-wrapped product!” – and other amusing procurement stories

Let’s assume requirements are done; I’ll save organisational scope bloat for another time. The next question is build or buy? How will we best get something that closely matches the requirements?

A market search ensues, only to discover that the requirements are pretty unique. So a custom/bespoke solution is required! That makes sense, but most organisations quickly discover that bespoke = expensive (in time and money), just as a tailor-made suit costs more than one off the rack.

It’s for this reason that hard-core mathematics/optimisation solutions have mainly been consumed by capital-intensive industries, where spending a few million to save tens or hundreds of millions made the business case stack up.

Organisations therefore often seek a COTS (Commercial Off The Shelf) solution (often after an expensive run-in with a bespoke approach), with the expectation that if they specify what they need and buy something “off the shelf” that fits, it should be low risk in both time and money. It appears to be quite an entrenched view among Australian CIOs, and in some cases it is justified, particularly in back office functions that don’t offer opportunity for differentiation – a point Wesfarmers Insurance CIO David Hackshall and DoD CIO Peter Lawrence make in an article by Brian Corrigan on itnews.com.au titled “How COTS became Australia’s default software setting”.

In the world of mathematics, optimisation and advanced planning and scheduling, it is a rare occasion when requirements are simple and generic enough for COTS to really work. Take one of the classical problems where mathematics is applied: vehicle routing. This is a well-picked-over area and sounds simple enough. Nonetheless, vendors fill niches within this niche in order to match requirements. As the vehicle routing survey in the February 2014 issue of OR/MS Today says, “VR customers are different, and so are their routing needs and problems, which require flexible, innovative answers”.

Vendors react to this COTS-centric procurement environment in a predictable way: of course they say they sell COTS, because otherwise they would fail miserably against the inevitable RFX evaluation criteria. The solution? They will (and I’ve been there) include “configuration” or “installation services” as ways to mask software development. The result? You get something that wasn’t a great fit, with lots of add-on development to meet your requirements. It’s hard to use, slow and doesn’t really provide the solutions you were hoping for. In many cases you end up with the worst of both worlds: the cost of bespoke but the poor fit of COTS.

As the aforementioned itnews.com.au article says “The middle ground between buying readymade software and building bespoke solutions is to customise a COTS package. Yet as many CIOs have discovered at great cost to their budgets and mental health, this can be a painful experience.”

This COTS/bespoke paradox is the problem we saw, and it is what we aim to address. So what does Biarri do differently? We take the benefits of bespoke and deliver them cheaply and quickly. You could say we aim to provide the best of both worlds.

Do the math

How do we do it? First of all, we do the maths first! Prove you can solve the underlying problem, and that it is worth solving, before investing in the delivery mechanism. Once you know there is value in the maths, make sure people can digest it via a well-designed solution. The Biarri Workbench is our SaaS platform that allows us to very quickly develop easy-to-use, custom applications with unique workflows, using an iterative/agile approach and lightweight deployment.

Who says B2C owns good UX?

Easy to use means designed with the user in mind. In the consumer world (B2C) this is the natural order of things (thanks, Apple). In the business world (B2B) it has taken a back seat, and that’s where our industrial designers come in: working with users to really understand how they do their jobs and will interact with the system, and producing mock-ups/concepts to get early feedback before a line of code is written.

So now we’ve proven the maths will provide value and designed a solution that users will love to use.

[Image: mock-up example]


Rinse and Repeat

What comes next is turning this into reality quickly, cheaply and iteratively. Quickly and cheaply are thanks to the Biarri Workbench, which provides security, a common database, and existing UI components, libraries and widgets that enable a custom-built application to be constructed very rapidly. And “iteratively” is thanks to web delivery, which means we can give users early access so they can start providing feedback. Agile development takes on real meaning as users see the mock-ups they helped design come alive in their web browser mere weeks (or even just days) after designing them. Engagement and user buy-in are huge as feedback is provided, incorporated and delivered instantly. Australia Post’s CIO Andrew Walduck understands this approach: “The number of times I’ve seen operating models where you start with requirements on one side, you dump it into operations on the other, and it fundamentally misses the point”.

[Image: example of tool UI]

It takes different strokes to move the world… yes it does

Do you remember the late-’70s/early-’80s TV series “Diff’rent Strokes”? I used to love the theme song.

Everybody’s got a special kind of story
Everybody finds a way to shine,
It don’t matter that you got not a lot
So what,
They’ll have theirs, and you’ll have yours, and I’ll have mine.
And together we’ll be fine….

When you start looking for your next optimisation, analytics or advanced planning and scheduling solution and your CIO/CFO says “budgets are tight and you can’t buy bespoke, you have to go COTS”, remember “it don’t matter that you got not a lot… you’ll have yours” because Biarri has a special kind of story.

The Future of Research in Operations Research

New technologies, particularly Internet-enabled technologies like cloud computing and social collaboration tools, are changing how mathematics is done.  This post showcases some of these emerging trends and lays out a vision for how the research and practice of Operations Research in particular could be much more effective.

Consider one of the most famous mathematical theorems of all: Fermat’s Last Theorem.  This theorem can be easily stated and readily understood, but has been truly devilish to prove, so much so that it lay unsolved for over 350 years.  Andrew Wiles announced a proof in 1993 and, after a gap was found and repaired, completed it in 1994 – a proof of over 100 pages, after 7 years of effort.  This is a very “traditional” way of doing mathematics (albeit, pure mathematics), where a “lone wolf” makes a large single contribution; but it also hints at the limits of what one brain can achieve.

By contrast, there is an equally old conjecture, from 1611, that has also only recently been proven.  This is Kepler’s Conjecture, which states that the most efficient way to stack spheres has 74.048% density.  An example of this “packing” is hexagonal close packing, which is the way you typically see oranges stacked in a shop.  In 1998 Thomas Hales produced a proof of Kepler’s Conjecture which required the solution of about 100,000 Linear Programming problems by computer.  This shows the emergence of a different model for “doing maths”, one that is applicable even for the purpose of obtaining mathematical proofs.
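
For reference, the 74.048% figure is the exact optimal density, achieved both by hexagonal close packing and by face-centred cubic packing:

    \delta_{\max} = \frac{\pi}{3\sqrt{2}} \approx 0.74048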

It is no surprise that computers have become indispensable for mathematics, in applied mathematics generally and in Operations Research in particular.  What is more interesting is the emerging role of other new technologies in how mathematics might be done.

These trends include:

•    More data (e.g. availability of spatial data via Google Earth; the prevalence of data mining and “big data” analytics), faster machines (the well-documented exponential growth of computing power), and more machines (we are rapidly approaching the age of ubiquitous computing).
•    Internet-enabled technologies such as mashups, Software as a Service, open APIs, cloud computing, web 2.0, and social network integration.  Browsers are prevalent, and using web and mobile apps instead of desktop apps – even for complex tasks like image editing – is now commonplace.  These newer trends all lie on top of the benefits and behaviours we have already become accustomed to from the Internet, such as: increasing market efficiencies (e.g. sectors that approach perfect information, such as hotel and flight bookings, and connecting businesses more directly to customers and suppliers); the commoditising of products (e.g. the so-called “race to the bottom” on pricing); the “long tail” (disproportionately favouring small businesses through the power of search, e.g. niche goods and services); hypercompetition (where it becomes more difficult to monopolise by keeping customers ignorant of alternatives); and the abolition of geography (the so-called “global village”).
•    Competition, crowd sourcing and collaboration – e.g. log-in style tools which let people from anywhere collaborate together; there are many examples of competitive crowd-sourcing mentioned below.
•    Not to mention the remarkable explosion of knowledge enabled through Wikipedia, Google, etc.

There have been some astonishing examples of how these trends have been put to use, both in academic and commercial contexts:

•    SETI@Home – a large-scale use of distributed computing over the Internet for research purposes, analysing radio signals (using digital signal processing) to look for signs of extra-terrestrial intelligence.
•    The Folding@Home project – using a statistical simulation method for protein folding.  This used consumer computation (such as PlayStation 3 cores and GPUs) to achieve millions of CPU days in short periods of time; it was much faster than the platform powering the SETI projects, and one of the largest-scale distributed computing systems ever created.  In Jan 2010 it simulated protein folding in the 1.5 ms range – thousands of times longer than previously achieved.
•    Netflix prize – a global competition for collaborative filtering of movie recommendations.  The competition started in October 2006, and by June 2007 there were 20,000 teams competing.  The Netflix grand prize of $1m was eventually awarded in September 2009.  The training data comprised 100 million ratings over 17,700 movies, and the winner had to beat Netflix’s Cinematch algorithm (RMSE 0.9514) by at least 10% – see the short calculation after this list.  Netflix’s own algorithm was beaten within 6 days of the competition’s start.
•    Other examples of competitive or prize-driven activity: the Millennium Prize problems, 99Designs.com, Kaggle.com – like the Netflix prize, many of these would not be possible at large scale without Internet enablement.
•    Human Genome Project – a good example of exponential progress, where the pace of sequencing far outstripped predictions.
•    Citizen Science, arXiv and various open science initiatives show a trend towards more open and collaborative approaches to science.
•    Polymath project – in a 2009 post on his blog, Timothy Gowers asked “is massively collaborative mathematics possible?”.  This post led to the collaborative solution of a hard combinatorics problem through blog comments, in around 7 weeks.  The success of the so-called “Polymath1” problem has spawned additional Polymath collaborations.
•    In 1999, Garry Kasparov played and eventually won a game of chess against a “World Team” which decided its moves by the votes of thousands of chess players, including many rank amateurs.  Kasparov called this challenging game “the greatest game in the history of chess”, and co-authored a book about it which constitutes the longest published analysis of a single game of chess.
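
To make the Netflix prize target concrete, here is a quick sketch of the arithmetic in Python (the rmse helper is the generic textbook definition, not Netflix’s actual evaluation code):

    import math

    def rmse(predicted, actual):
        """Root mean squared error over paired predicted and true ratings."""
        n = len(actual)
        return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

    cinematch_rmse = 0.9514
    target = cinematch_rmse * 0.90   # a 10% improvement on Cinematch
    print(f"Grand prize threshold: RMSE <= {target:.4f}")   # prints 0.8563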

These examples of massive collaboration and computation are not isolated curiosities, but the leading edge of a new approach to solving problems, one that shows how knowledge and creativity can be harnessed by new technologies.  They show that there is an emerging new way of working which can be surprisingly effective.

Consider now the typical Operations Research researcher/academic (and, to a lesser extent, many OR practitioners):

•    They care about the HOW: algorithms, parameters, results
•    They want access to powerful computation and solvers
•    They want to collaborate with colleagues
•    They want to replicate and build upon the work of others
•    They’d like their contributions to be recognised – in both senses of the word: both visible, and valued.

The usual current way of working, through journal paper submissions, does not necessarily cater well for these needs.  Why not?  Consider a typical OR paper, which might contain an Abstract, a Literature Review, a Statement of a Problem, an Approach, some Results and experiments, and a Conclusion and References.  Such papers have some good properties: they are generally self-contained (you can find out a lot about the given problem once you’ve found the paper); they’re peer-reviewed; and – tellingly, perhaps – they’re easily replicated or extended by the same author(s) (for their next paper!).  However, they are very often:

x    Difficult to search for and find – papers are often held behind pay-walls or restricted access (e.g. in University collections).
x    Hard to reproduce the results – in Operations Research, implementation details (parameters, source code, data sets) are important, but these aspects are typically not peer-reviewed – so how can the computations be verified, or replicated by others?  They fail Karl Popper’s basic test of falsifiability in science.  Even when source code is made available, it has likely been run in a completely different environment, making proper comparison of results or run-times difficult.
x    Long gestation times – the publication cycle from submission to publication is often extensive, and includes opaque periods when papers are reviewed.  This tends to encourage a culture of “the scoop” – a pressure to publish a new result before someone else.  Moreover, arguably the entire publication environment is self-serving and institutionally locked in (by universities and journal publishers).  Neither is it very democratic; when students and researchers leave academia and become OR practitioners, it can be difficult to keep close ties.
x    “Soft” links – the references in papers are usually simply textual, not hypertext links, even when published electronically.

The OR community prides itself on the Science of Better, of finding optimal or near-optimal ways to improve the world, but its own internal mechanism for improvement is itself inefficient (sometimes highly so).  In particular the process does not much allow for micro-contributions; the slow journal-focused approach inhibits rapid incremental development (which is somewhat ironic, considering that many papers offer slight improvements upon previously published results).  Nor does it make it easy to discover the state of the art for a particular problem – e.g. what are the best algorithms, the best test instances, and the best known results on those instances.  This must often be laboriously uncovered: even when the papers embodying the best known results are found, the associated data sets are non-standardised and often held on someone’s home page; there is no standard code (and very often, no code at all) for reading in the data sets, and sometimes interpretation of the data is needed as documentation is lacking.  Certainly there are isolated but laudable aspirations towards “Reproducible Research” in science, but it has not been generally adopted by the OR community.

The question therefore becomes: how can we harness new technologies for Internet-based collaboration and crowd-sourcing to do Operations Research more effectively?  What if we could complement (or even avoid!) slow journal-focused progress?  How can we improve upon siloed academic results and source code?

We can remove IT burdens by using the power of cloud computing – delivering powerful computational and collaborative capability through the browser, being flexible about how the underlying computational power is employed, and storing the data in the cloud.  Software as a service, or open-source software, releases us from capital investment (which also allows us to switch more easily and not get too “invested” in one way of working); instead of the pursuit of mathematics research getting buried under software installation, environment issues and management (the tail wagging the dog), we can again make computation serve us.  Servers in the cloud are usually not time-shared (as in the bad old days) – in fact computational power is now generally cheap enough that we can even be wasteful of server resources.  We can “spin up” compute instances and pay per hour if we like.  A good example is Gurobi, the leading commercial LP/MIP solver: Amazon EC2 instances come prebuilt with Gurobi and can be essentially “rented” by time (in small or large amounts), so that no installation is necessary.
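
To give a feel for how low the barrier has become, here is a toy LP in Gurobi’s Python API (gurobipy); pointing the same script at a rented cloud machine is a licensing/configuration matter, not a code change:

    import gurobipy as gp
    from gurobipy import GRB

    # A toy LP -- the model code is identical whether Gurobi runs locally
    # or on a cloud instance "rented" by the hour.
    m = gp.Model("toy")
    x = m.addVar(name="x")
    y = m.addVar(name="y")
    m.setObjective(3 * x + 2 * y, GRB.MAXIMIZE)
    m.addConstr(x + y <= 4, name="capacity")
    m.addConstr(x + 3 * y <= 6, name="blend")
    m.optimize()

    if m.Status == GRB.OPTIMAL:
        print(f"x = {x.X}, y = {y.X}, objective = {m.ObjVal}")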

We can share models and data by using versioned wiki-style tools that store models and their results in the cloud.  I lay out a detailed vision for how this might work later.

We can aim to build an online, global OR community that collaborates frictionlessly, using forums and social network integration on a common platform.  Log-ins would facilitate a user and user group system; websites (such as StackOverflow and its ilk) also show that user reputations can work.  Particularly, a common platform can also build bridges between academics and practitioners.

If these aims seem ambitious, it’s worth pointing out that many of the pieces of the puzzle we need are already (conceptually at least) in place:

•    www.or-exchange.com / math.stackexchange.com
•    OR Library / www.exampleproblems.com
•    www.wikipedia.org / www.proofwiki.org
•    Gurobi via AWS + Python
•    MathML / MathJax
•    Tons of modelling languages (Mosel, AMPL, etc)
•    Online IDEs – e.g. CodePad, Eclipse Orion, etc.
•    Collaborative referencing tools e.g. www.mendeley.com
•    Cloud storage and cloud platforms (examples are Google App Engine, Heroku, etc)

Some of these examples show skeletal websites whose communities have not really “taken off”.  Part of the reason is that they are not open enough – for example, OR Library, probably the biggest resource for OR data sets on the Internet, currently resides on someone’s home page – it is not editable (you cannot, for example, add a data set for a new problem), nor integrated (even with basic links) with the associated results or source code; nor does it allow for comments or discussion.  Other tools lack adoption simply for not being powerful, fully-featured or wide-ranging enough to attract a broad user base.

So we have tools which form the pieces of the puzzle, they just need to be put together well to serve a more ambitious purpose.  What is this ultimate purpose? I claim that our goal should be global crowd-sourced Operations Research in real time.

Here’s one way that it might work.  In the pictures I’ve cheekily called this rough mock-up of such a platform “OR 4 ALL”.

The different types of pages are:

A Problem page describes a mathematical problem, for example, “The Capacitated Vehicle Routing Problem”.  These pages will look similar to the corresponding Wikipedia page, except that they will use a math-aware markup language to facilitate easy authorship (Wikipedia pages render equations as images, which is a barrier to making micro-changes).  These pages will provide links to the associated data set pages and the associated algorithm pages (see below).  These links will likely be ranked in some way (for example, the algorithm with the most community votes might rank first, and there might also be an automatic ranking of the algorithms that have produced the best known solutions across the widest range of data sets).
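
As an illustration (a hypothetical fragment, assuming LaTeX-style source rendered by something like MathJax), the objective of a generic CVRP formulation could then live on the page as editable text rather than as an image:

    % minimise total arc cost, where x_ij = 1 if arc (i,j) is used
    \min \sum_{(i,j) \in A} c_{ij} \, x_{ij}

A micro-contribution – fixing a subscript, say – then becomes a one-character edit rather than a re-rendered image.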

Algorithm Approach pages describe an approach to solving a problem.  Here there might also be a discussion of the parameters and their effects, and perhaps some proofs that are specific to this algorithm.

Data Set pages list the data set instances for a given problem, and include links to associated results.

Implementation pages hold modelling language source code.  Python would probably work best for this.  The page would contain an online IDE (essentially allowing users to code right inside the browser).  Crucially, however, there is a Solve button: you can apply the implementation to a data set, and the system will run it for you and compose a Results page with the solution.
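
To make this concrete, here is a minimal sketch of the contract such a page might impose (all names here are hypothetical, invented for illustration; this platform does not yet exist):

    # Hypothetical shape of an OR4ALL Implementation page.  The in-browser
    # IDE holds a module exposing solve(); pressing Solve runs it against a
    # chosen data set and composes a versioned Results page from the output.

    def solve(data: dict) -> dict:
        """Solve one data set instance.

        `data` might carry, say, a distance matrix and vehicle capacity for
        a CVRP instance; the returned dict carries routes and an objective.
        """
        routes = []  # a real implementation would construct routes here
        return {"routes": routes, "objective": 0.0}

    # Conceptually, the Solve button then amounts to:
    #   result = solve(load_data_set("cvrp/instance-01"))
    #   publish_results_page(result, implementation_snapshot="v12")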

Results pages, like the other pages, include history and versioned snapshots.  However, a Results page also links right back to the implementation exactly as it was run, in a timestamped fashion.  That way, even if the implementation later changes, the result can still be replicated.

Hence, you do not need to leave the OR4ALL web application in order to go right through from composing an algorithm, implementing it and running it, and comparing its results to the state of the art.

To various extents the different types of pages will have the following features:
•    WYSIWYG editing
•    versioning
•    comments
•    API access
•    voting
•    social media integration
•    links (e.g. related data sets & problems, and external links)
•    opt-in alerts for updates

The arrows shown in the diagram are typically “one to many” links.  For example, there will be many algorithmic approaches to one particular problem.  Similarly, for any one problem clearly there are also many data sets.  Each data set/algorithm combination can be run many times with small parameter or implementation changes, giving many result sets.

All changes and contributions are versioned, timestamped and archived.  Contributions are attributed to the appropriate user.  The entire system is searchable, with everything linked.  It allows for rich content, e.g. videos and animations where there is a pedagogical angle, and graphs and other analytical components where there is a data visualisation need.
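
One way to picture the underlying data model (a minimal sketch in Python dataclasses; the field names are assumptions for illustration, not a specification):

    from dataclasses import dataclass, field

    @dataclass
    class Result:
        objective: float
        contributor: str                # contributions attributed to a user
        run_at: str                     # timestamp of the run
        implementation_version: str     # snapshot the run is tied to
        verified: bool = True           # False if produced outside OR4ALL

    @dataclass
    class DataSet:
        name: str
        results: list = field(default_factory=list)    # one data set, many results

    @dataclass
    class Algorithm:
        title: str
        implementations: list = field(default_factory=list)

    @dataclass
    class Problem:
        title: str
        algorithms: list = field(default_factory=list)  # one problem, many approaches
        data_sets: list = field(default_factory=list)   # one problem, many data sets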

The data would also be accessible – potentially in unforeseen ways – via open APIs.  That is, other completely separate systems would be able to repurpose the data programmatically; as an example, a completely different web application could be devoted to analysing particular test instances and their results, enjoying direct (read-only) access to the required data via the OR4ALL API.
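
For instance (a hypothetical sketch – the URL and response shape are invented for illustration), a separate analysis tool could pull results read-only like this:

    import requests

    BASE = "https://or4all.example/api/v1"   # hypothetical endpoint

    resp = requests.get(
        f"{BASE}/problems/capacitated-vehicle-routing/results",
        params={"data_set": "instance-01", "order": "best"},
    )
    resp.raise_for_status()
    for result in resp.json()["results"]:
        print(result["objective"], result["contributor"], result["run_at"])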

Cloud computation is achieved via open APIs.  The algorithms are proven and reproducible for given data sets, so practitioners or others looking to build further on an approach do not have to start from scratch; similarly, practitioners have a starting point if they want to expand or implement an algorithm in their own software.  Furthermore, other, non-default computation engines can potentially be “plugged in” behind the system (perhaps specific ones aimed at particular requirements, e.g. massively parallel dynamic programming).

Clearly a platform like this would be – at a minimum – an amazing boon for students.  But all users would benefit from the open, “many eyes” style of model and implementation development – the phenomenon that “somewhere out there, someone can easily help you”.  In particular, researchers and practitioners can find areas of common interest.

The system as described can start simple but grow in sophistication.  Reputation points might be gained for best solutions, with points gained from previous best solutions decaying away over time.  Companies could offer bounties for the best solution to their difficult data sets.  There would be plenty of opportunities for third parties to provide a marketplace for alternative computation or analysis plug-ins via the APIs.  Or the system could potentially be monetised via advertising – for example, commercial OR companies could proffer their own software packages and services, with these ads appearing only on pages associated with that company’s specific areas of expertise.

Examples of the everyday use of this kind of system in practice might be (from a typical user’s point of view):
•    I get a notification email one day that someone is trying a new approach to solving a problem on a data set that I recently optimised;
•    I get a small boost in reputation because users upvoted an approach I suggested to someone else’s algorithm;
•    I use a proprietary piece of software (completely outside of the OR4ALL system) to find a new, closer-to-optimal solution to a hard problem, and upload it.  It ends up in the linked Results page for that problem, with an icon marking it as “unverified” (as it was run outside the OR4ALL computation framework);
•    I upload a teaching video to an Algorithm Approach page to help the students I am teaching.

Clearly there are some potential challenges to be overcome: the signal-to-noise ratio, for one (keeping unhelpful contributions from polluting or distorting progressive development).  A system like this would also likely enforce standardisation of formulations, and to a lesser extent implementations – although this arguably has more benefit than downside.  Many in academia might also find it hard to move away from “research grant and published paper”-driven development, or be otherwise uncomfortable with voting or reputation-based status systems; clearly we would need appropriate and effective metrics to measure the contributions of each user or user group.

I firmly believe that this way of working will come about no matter what; I’d be very surprised if in 10 years’ time this more collaborative, technology-enabled approach had not overtaken journal-based research.  So wouldn’t it be better for the OR community to be self-directed and evolve in a very aware way towards this goal?  It might not even be particularly hard to build – as we have seen, most of the pieces have already been proven possible in some form or other.  The possibilities and benefits of the system I have described are exciting and endless – if there were a truly open and powerful platform for OR with good community adoption, it would attain uses and purposes which we can now barely imagine.

Further reading
Reinventing Discovery – a book about internet-powered collaborative science
Wakari – a new, hosted Python data analysis environment that has some of the features described above

The Launch of Biarri’s WorkBench

With the impending launch of Biarri’s WorkBench and our ongoing close relationship with Schweppes for the daily routing of soft drink deliveries (an application of perhaps the best-known operations research problem: the vehicle routing problem), I thought that the following excerpt from a journal article submitted to the Asia-Pacific Journal of Operational Research would be a very timely blog post.

The journal article is entitled “Real-Life Vehicle Routing with Time Windows for Visual Attractiveness and Operational Robustness” and it describes the vehicle routing algorithm we have implemented for Schweppes.

The excerpt details a specific example encompassing two things we are very passionate about at Biarri. First, “Commercial Mathematics” – that is, making OR (well, not strictly just OR) work in the real world. And second, the revolutionary capabilities that the advent of cloud computing brings to the delivery of software.

“Vehicle routing problems manifest in a remarkably wide range of commercial and non-commercial enterprises: from industrial waste collection to grocery delivery; underground mining crew replenishment to postal and courier collection and delivery; inbound manufacturing component transportation to finished car distribution; in-home primary health care delivery to pathology specimen clearances from surgeries for analysis; and from coal seam gas field equipment maintenance to beverage distribution, to name but a few.

Automated planning systems used by industry at present are predominantly client-server or desktop based applications. Such systems are often: expensive, requiring a large upfront capital investment; accompanied by a large software deployment project requiring initial and ongoing IT department cooperation; customisable to a particular organisation’s requirements, yet commonly retaining a large amount of exposed functionality due to the breadth of the existing client base; and in need of substantial user training, as the workflow is usually not restricted in a linear fashion …. Each of these characteristics constitutes a barrier to adoption of automated planning systems, and for most small to medium enterprises these barriers prove insurmountable.

With the advent of cloud computing and software as a service (SaaS) these barriers are being removed. SaaS: embodies a different commercial model; has essentially no IT footprint; mandates (as vendors may never directly interact with potential clients) simple intuitive linear workflows; and involves almost no end user training beyond perhaps an optional demonstration video.

The emergence of this new avenue for the delivery of optimisation based planning systems heralds a heretofore unparalleled opportunity for operations research practitioners to engage with a wider potential consumer base than ever before. However, the nature of the delivery mechanism requires the algorithms developed: to be robust and flexible (within their domain of application they must be capable of dealing with a wide range of input data); to have very short run times (the user base is more likely to be under time pressure than ever before); to produce high quality solutions (noting the inherent trade-off between run time and solution quality); to be wrapped in a simple linear workflow (meaning it is always obvious what the next step in the planning process is); but above all, to be able to produce real-life, practically implementable solutions, without the need for user training and/or experience.

For pure delivery, or pure pick-up, vehicle routing applications, real-life, practically implementable solutions are often synonymous with geographically compact, non-overlapping routes with little or no intra-route cross-over. There are numerous reasons why such solutions are preferred …. If a customer cannot be serviced at the preferred time (e.g. the vehicle cannot get access, the customer is closed, another delivery is taking place, the customer is too busy), because the route stays in the same geographical area, it is easy to return to the customer at a later time. During busy traffic periods drivers are loath to exit and re-enter a motorway to service individual customers. Even though such customers may be en route to the bulk of the customers the route services, thus incurring a minimum of additional kilometres, they may nevertheless be far from the majority of the customers the route services. If there is severe traffic disruption, it is easier to use local alternate routes between customers in a route that is geographically compact, to ensure that pick-ups or deliveries can still be made. Third party transport providers, which prefer routes to be as simple as possible, may exert some influence over the planning process. Finally … it is easier to maintain customer relationships by assigning drivers to routes that routinely service a similar geographical area. In summary, solutions which are more visually attractive are more robust, and thus more likely to actually deliver the full extent of the cost savings that should flow from the use of automated planning systems.

This paper describes an algorithm for the vehicle routing problem with time windows, …. The algorithm is: robust and flexible; fast; wrapped in a user interface utilising a simple linear workflow and so requires no user training or experience; and produces high quality, visually attractive and practically implementable solutions.”