
SaaS Optimisation vs Traditional Approaches

SaaS optimisation solutions have become a necessity in modern business. They don’t just deliver lower costs, better products, and ease of use; they also bring better customer satisfaction, better employee satisfaction, and more streamlined business processes.

Gartner predicts the SaaS market will reach $85.1 billion globally by 2019, up from $72.2 billion in 2018. While much of this growth will come from the continued adoption of core infrastructure such as CRM and ERP, point solutions will continue to emerge to fill the gaps.

What are point solutions and why should they matter?

Unlike ERP systems, which provide end-to-end solutions for problems you might not have, point solutions target specific pain points within your business: the best way to route your vehicles, the best way to manage your employees, or the best way to build your facilities (location and size).

Point solutions also give you the ability to integrate with larger ERP systems.

A unique problem requires a unique solution.

The Key Benefits of SaaS

The benefits of a SaaS solution over a traditional approach are evident in five key areas:

  • Reduced time to benefit. SaaS development shortens the feedback cycle to the point where anyone, anywhere in the world, can test software in a web browser.
  • Lower costs. You don’t need expensive IT infrastructure or a big IT department; these worries are outsourced to the company offering the solution.
  • Scalability and integration. SaaS solutions can be scaled up and down based on your requirements, so you can draw on more computing power during peak times or seasonal spikes.
  • Faster releases (upgrades), because the shorter feedback loop allows for much easier troubleshooting.
  • Ease of use and rapid proofs of concept.

These benefits have allowed Biarri to develop world class tools for operational and strategic business requirements.

SaaS vs Traditional Software at Biarri

Beyond these generic benefits, there are a number of specific advantages we see at Biarri. We believe the SaaS approach is far superior for getting a client’s solution into a production environment sooner rather than later, and it also makes the job of support much more efficient. The specific benefits Biarri sees are outlined in the table below.

Development
  • SaaS: Easy to get constant feedback using the Mitigation by Iteration approach. More iterations and feedback from clients can significantly reduce development timeframes.
  • Traditional software: The client needs to wait for the final version to be installed in their testing environment. If iterations are used, they are usually far less frequent (and more difficult to manage) than with a SaaS approach.

User Acceptance Testing (UAT)
  • SaaS: All the client needs to do is use a testing environment URL. No UAT-specific hardware is required by the client.
  • Traditional software: Clients often need to set up separate computers and/or databases for the UAT environment.

Delivery
  • SaaS: The client usually only needs to type the production URL into a web browser. Sometimes their firewall needs to be configured to allow access to our servers.
  • Traditional software: The software has to be installed on each machine, which requires more resources from IT departments.

Hardware
  • SaaS: Clients only need access to an internet-connected device. It does not need to be powerful; it could be a desktop or a mobile device.
  • Traditional software: Each user needs the computing power and memory required to run potentially resource-hungry optimisations.

Software cost
  • SaaS: Our optimisations sometimes use expensive third-party tools (like Gurobi) which sit on our servers. This cost is distributed across all our clients in the form of licensing fees.
  • Traditional software: Each user would need the expensive third-party software installed on their computer, which can massively increase the final price if there are many users.

Support
  • SaaS: We can easily replicate issues because your data and logs are on our server. This often means an issue is replicated, addressed, and a fix delivered all in the same day.
  • Traditional software: It can often be difficult even to determine whether an issue is caused by the software or by the complex IT environment it runs in (internal firewall issues, database issues, Citrix issues, authentication issues, timeout issues, and so on). If the problem is with the software, it is sometimes necessary to export the client database and transfer it to the vendor (via FTP, or if it is too large, via snail mail). Replicating the issue can also be difficult because the client’s and vendor’s environments differ (e.g. different database, patch or operating system versions). This process can take weeks even for relatively simple issues.



Making it Accessible: Applying the Optimisation Toolkit

At Biarri Optimisation one of our core priorities is to uncover interesting new mathematical problems in industry, build optimisation engines to solve them, and use those products to incubate new Biarri entities. It is a constant source of amazement just how many disparate contexts the tools of our trade can be applied to: in particular, Mixed Integer Programming, one of the pillars of our discipline of Operations Research. Tools like this live in our optimisation toolkit.

Operations Research (OR) has its origins in World War II, where mathematics and analysis were used to determine what convoy size was most effective for avoiding German U-boat detection, the best paint colour to maximise camouflage for aircraft, and the best trigger depth for aerial-delivered depth charges to maximise U-boat kills. You can read about these and other fascinating applications in the excellent book Blackett’s War.

The problems we solve

We tend to think of the problems that we try to solve in the following way.  First, we identify the decision variables that are involved: what elements of an operation do planners and users have the power to change?  Second, we establish the objective: what are planners looking to minimise or maximise?  This might be cost, or some combination of minimising cost with maximising benefits or safety (for example).  Lastly we pinpoint the constraints: what are the real-world limits on resources (people, equipment, budget) that have to be taken into account?

An example is a daily workforce planning problem faced by warehouse and distribution managers in many different industries.  Here there is a “profile” of required work across the day, of various types (e.g. forklift work, truck loading/unloading, racking replenishment, etc), and a set of personnel who have different skills and availabilities to do this work (permanent, casuals, with various possible start times).  How to cover all the work while keeping the total shift cost as low as possible?

The objective in this example is cost; the decision variables are the allowed shifts that can be operated; and the constraint is that all the work must be done by appropriately skilled personnel. The extra complexities we might encounter come about where people can do multiple task types (e.g. they are qualified for both forklifts and manual truck loading) and a changeover time is incurred between tasks; they also require meal and rest breaks, or incur overtime; there is fatigue to take into account (productivity typically starts dropping later in a shift); and there are equipment and space/congestion constraints. You can even apply a productivity multiplier if different people are more or less productive at different tasks. Add to this the fact that some of the work can be done well ahead of time (pre-picking for truck loads, for example), which allows the work profile to be “smoothed” over, and you have quite a complex planning task indeed!
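To make this concrete, here is a minimal sketch of how a stripped-down version of that shift-covering problem might be written down in code. It assumes the open-source PuLP modelling library (not necessarily what we run in production), and the shifts, costs and hourly workload are invented purely for illustration – a real model would carry all the complications listed above.

# Toy shift-covering MIP.  Assumes the open-source PuLP library; the shifts,
# costs and hourly workload below are invented for illustration only.
import pulp

hours = range(8, 18)                                       # planning horizon: 8am to 6pm
required = {h: 3 if 10 <= h < 14 else 2 for h in hours}    # workers needed in each hour

# candidate shifts: (start hour, end hour, cost per person)
shifts = {
    "early": (8, 14, 90),
    "late":  (12, 18, 90),
    "full":  (8, 18, 140),
}

prob = pulp.LpProblem("shift_covering", pulp.LpMinimize)

# decision variables: how many people to roster on each candidate shift
x = {s: pulp.LpVariable("num_" + s, lowBound=0, cat="Integer") for s in shifts}

# objective: total shift cost
prob += pulp.lpSum(shifts[s][2] * x[s] for s in shifts)

# constraints: every hour's workload must be covered by people on shift at that hour
for h in hours:
    prob += pulp.lpSum(x[s] for s, (start, end, _) in shifts.items()
                       if start <= h < end) >= required[h], "cover_%d" % h

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in shifts:
    print(s, int(x[s].value()))
print("total cost:", pulp.value(prob.objective))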



When we formulate this as a Mixed Integer Program (MIP), we aim to solve it for the overall minimum cost. This overall minimum is known as the global optimum, and any one part of the solution, looked at in isolation, might seem “sub-optimal”. In our workforce planning example, for instance, a person might be allocated a very short shift. But this makes sense in the context of the overall problem: it might be that there is a small piece of work left over that cannot “fit” into any of the other shifts.

Feasible solutions

Part of the art of applying this type of mathematical approach lies in making sure that there is always an implementable solution.  For example, what if there are not enough people to cover the work? We do not want the solver to tell the user that the problem is simply “infeasible” – that there is no plan.  Instead, we build in extra decision variables which model the unallocated work over time, and we give these variables a high artificial cost in our objective (so the solver still tries to cover as much work as possible); now we can also report to the planner how much work is uncovered.

This technique is an example of a more general method which distinguishes between hard and soft constraints.  Hard constraints must always be met; there is no wiggle room.  By contrast, soft constraints (as in the “uncovered work” example) allow you to break a constraint, but incur some penalty for doing so.  In the workforce example, overtime can also be thought of as a soft constraint (once you exceed the regular shift limit, you incur extra cost).
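To show the mechanics, here is a second, deliberately tiny PuLP sketch (again with invented numbers): the “uncovered work” slack variable keeps the model feasible even when there are not enough people, and the large artificial penalty means the solver only leaves work uncovered as a last resort.

# Soft-constraint sketch (hypothetical data, assumes PuLP).  The slack variable
# absorbs any shortfall, at a penalty chosen to dwarf the genuine shift costs.
import pulp

prob = pulp.LpProblem("soft_covering", pulp.LpMinimize)
staff = pulp.LpVariable("staff_on_shift", lowBound=0, cat="Integer")
uncovered = pulp.LpVariable("uncovered_work", lowBound=0)   # the slack variable

SHIFT_COST = 90
PENALTY = 10000            # artificial cost per unit of uncovered work

# objective: real cost plus heavily penalised uncovered work
prob += SHIFT_COST * staff + PENALTY * uncovered

# the hard version would be: staff >= 5.  The soft version lets the slack absorb
# the shortfall, so the model stays feasible even with too few people available.
prob += staff + uncovered >= 5, "cover_demand"
prob += staff <= 3, "only_three_people_available"

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("staff rostered:", int(staff.value()), "| uncovered work units:", uncovered.value())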

Soft constraints can also be used as a way to “explore” the solutions that are near the optimal solution – there are some trade-offs you might be willing to make in order to save an extra chunk of cost.  You sometimes have to be quite careful when there are several types of soft constraint or extra penalties: for example, allowing very large penalties can obscure the smaller components of the objective. When you have multiple competing objectives, it can also sometimes be hard to explain why the result looks like it does.

Trusting the results

Of course, we must also exercise caution when trusting the results of a mathematical process. Our solvers will make arbitrary decisions when we do not provide guidance – a common instance is a “shallow” cost function, where there are several similar solutions with the same or nearly the same cost. These might be identical from an optimisation point of view: in our workforce context, several people with the same skills and availability might be able to do the same task, and if it makes no difference to the total cost, who should we choose? Often there is a real-world analogue to this problem of “breaking ties”: here, for example, we might choose the employee based on seniority, or to ensure a more attractive longer-term roster, or on a host of other factors.

Biarri has always strived to make optimisation “accessible” – part of which means making our optimisation results implementable in the real world – and in these examples I hope you have seen some of the nuances and complexities that underlie this promise as well as the tools that comprise our optimisation toolkit.

By Andrew Grenfell


The Biarri Applied Mathematics Conference 2016

Optimisation. For Real. The BAM Conference 2016

The world is changing around us. Optimisation – which used to be hidden behind closed doors – is now at our fingertips, shaping the way we live our lives, move our goods, and make our decisions. From healthcare, sports, transport and manufacturing through to the deployment of next-generation fibre networks, optimisation is core to working in the 21st century.

We are excited to announce that the Biarri Applied Mathematics Conference is on again for 2016. Registrations are now open and this year the conference will be held in Brisbane, Australia, on June 28 and 29 at QUT Gardens Point with support from The Queensland University of Technology, The Australian Mathematical Sciences Institute and Biarri.

The Biarri Applied Mathematics Conference is a free two-day event that brings together innovators, thought leaders, industry and students to bridge the gap between academia and practice. It’s a perfect opportunity for networking, seeing how maths is actually being applied in the real world and discovering the benefits in applying mathematical techniques across business.

This year we’ve got some awesome presentations that are set to make you think about how we can be doing things better:

  • Evan Shellshear, Simultek, How to beat the best at optimisation
  • Stephen Gay, MIDAS Tech International, Machine learning methods for mineral processing
  • Robin Pearce, University of Queensland, Extending the MIP toolbox to crack the Liner Shipping Fleet Repositioning Problem
  • Michael Forbes, University of Queensland, Reviewing the Decision Review System in Cricket
  • And much more.

Head over to the BAM website to find out more and register now! Join us this year to see real optimisation in action and learn about how it’s used to keep our world moving.

Register for BAM now!

Death by parameters

In my previous blog post I wrote about the great flexibility and power of Genetic Algorithms. So you may have thought: why do I need help with my optimisation problems? Can’t I just grab an off-the-shelf Genetic Algorithm and use it? However, as with everything, there are two sides to every story, and this time I’ll show you why optimising with Genetic Algorithms is much harder than it seems.

The flexibility of Genetic Algorithms arises, in part, from the freedom to choose a dizzying number of parameters. When writing your own code you potentially have to decide on things such as the size of the population of competing solutions, the number of generations over which to improve them, the mutation and crossover probabilities, the percentage of the population to eliminate in each generation, and many more.

With so many choices, picking the parameters correctly can determine whether the algorithm bears fruit or withers and dies. This difficulty has led to many papers on the best way to choose parameters. Unfortunately, even if you manage to choose good parameters for one problem, there is no guarantee that the same parameters will work for the next.

So over the years researchers have searched for other powerful optimisation techniques which don’t suffer from such parameter overload. From this research we now have a number of promising algorithms. In particular, in 2009 Xin-She Yang and Suash Deb came up with the ultimate parameter-starved algorithm: the Cuckoo Search Algorithm. In this algorithm there is one parameter. Yes, only one.

A reed warbler feeding a young Cuckoo (from Wikipedia)

The Cuckoo Search Algorithm is inspired by the parasitic behaviour of some cuckoo species, such as the European common cuckoo. These species lay their eggs in the nests of other host birds in an attempt to trick the hosts into raising their nestlings. Sometimes this devious trick succeeds. When it doesn’t, the host bird either throws the egg over the side of the nest or simply abandons the nest altogether.

In the Cuckoo Search Algorithm the cuckoo’s ploy translates into an optimisation algorithm via four idealised rules, which are repeated until the desired optimisation criteria are fulfilled. In the algorithm below, each egg represents a solution, and by a cuckoo “laying an egg” we mean creating a new random solution:

  1. Each cuckoo lays an egg in a random nest.
  2. Out of all laid eggs keep a number of the best eggs equal to the number of cuckoos.
  3. Abandon a fixed fraction of the worst eggs.
  4. Repeat

Did you find the parameter? The single, lonely parameter in the Cuckoo Search Algorithm is the fraction of the worst nests that are abandoned. This parameter affects how thoroughly the algorithm searches the space of possible solutions, so a lower value means the algorithm will find a local optimum faster (although perhaps not the desired global optimum).
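For the curious, here is a toy, self-contained sketch of that loop, minimising a simple one-dimensional test function. The objective, nest count and abandonment fraction are all made up, and real implementations add refinements (such as Lévy-flight steps) that this bare-bones version leaves out.

# Toy Cuckoo Search: minimise f(x) = (x - 3)^2 over [-10, 10].
# ABANDON_FRACTION is the single parameter discussed in the text.
import random

def f(x):
    return (x - 3) ** 2

N_NESTS = 20
ABANDON_FRACTION = 0.25     # the lonely parameter
ITERATIONS = 200

def random_solution():
    return random.uniform(-10, 10)

nests = [random_solution() for _ in range(N_NESTS)]

for _ in range(ITERATIONS):
    # 1. each cuckoo lays an egg in a random nest (i.e. propose a new random solution)
    eggs = [random_solution() for _ in range(N_NESTS)]
    # 2. out of all eggs and nests, keep the best N_NESTS solutions
    nests = sorted(nests + eggs, key=f)[:N_NESTS]
    # 3. abandon a fixed fraction of the worst nests and replace them with random ones
    n_abandon = int(ABANDON_FRACTION * N_NESTS)
    nests = nests[:N_NESTS - n_abandon] + [random_solution() for _ in range(n_abandon)]
    # 4. repeat

best = min(nests, key=f)
print("best x:", round(best, 4), "f(x):", round(f(best), 6))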

This avian-inspired algorithm has been used on numerous difficult problems to oust other optimisation methods from their leadership positions. For example, it has been used for spring and beam design problems, scheduling problems, the famous travelling salesman problem and even optimisation challenges in nanoelectronics! Like most other heuristic optimisation methods, its range of applications can be quite astounding.

So now that you’ve cancelled the download of a promising Genetic Algorithm and started downloading a shiny new Cuckoo Search Algorithm, I thought I’d warn you again that there’s another side to this story too. Although the bird-based algorithm makes parameter choice simple, it may or may not be the best choice for a given optimisation problem. There are many heuristics for optimisation problems, and choosing the right heuristic is probably much harder than choosing the right parameters for a given optimisation method. But you don’t have to worry about your precious nest eggs, because luckily you’re on the website of a company competent enough to help you with this choice.


One tech to rule them all

Back in high school, when you were battling polynomials with derivatives, you learnt one of your first all-purpose optimisation techniques: given a function, take its derivative, set it to zero and solve for the optima. Voila! After you had coerced the equation into giving up its maxima secrets, you could be safe in the knowledge that you had found the optimal values.
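If you would like to relive the experience without the hand cramps, a computer algebra system will happily do the differentiating and solving for you. The sketch below assumes the SymPy library and an invented cubic.

# High-school optimisation, automated: differentiate, set to zero, solve.
# The cubic is an arbitrary example; assumes the SymPy library.
import sympy as sp

x = sp.symbols("x")
f = x**3 - 6*x**2 + 9*x + 1

critical_points = sp.solve(sp.diff(f, x), x)      # where f'(x) = 0
for p in critical_points:
    kind = "minimum" if sp.diff(f, x, 2).subs(x, p) > 0 else "maximum"
    print("x =", p, "-> local", kind, ", f(x) =", f.subs(x, p))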

If you enjoyed computing the optimal speed at which to jump your calculus hurdles, then you probably went on to university to learn about other mathematical tools. Once there you most likely met your second all-purpose optimisation technique, Linear Programming. Given a linear function with linear constraints, you learnt that you could use any of a number of tools, such as the Simplex Method, to solve such problems easily and quickly.
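As a small taste, here is a toy linear program solved with SciPy’s linprog routine; the numbers are made up, but it is exactly the class of problem described above.

# Tiny linear program, assumes SciPy: maximise 3x + 2y subject to
# x + y <= 4 and x + 3y <= 6 with x, y >= 0.  linprog minimises, so we negate.
from scipy.optimize import linprog

c = [-3, -2]                        # negated objective coefficients
A_ub = [[1, 1], [1, 3]]             # left-hand sides of the <= constraints
b_ub = [4, 6]                       # right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

print("x, y =", res.x)              # optimal decision variables
print("maximum objective =", -res.fun)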


Figure 1 Ant Colony Optimisation in Action

From there, if you had more of an applied orientation, you probably went on to learn about the more exotic optimisation creatures: Genetic Algorithms and Simulated Annealing, as well as a host of other nature-inspired algorithms such as Ant Colony Optimisation, Swarm Optimisation and Artificial Bee Colony Optimisation. In the 1990s and 2000s it seemed that, for almost any type of problem, one could find a new nature-inspired optimisation technique to solve it better than the previous, more mainstream tools.

However, in spite of the swarm of new optimisation tools, most problems for which these new techniques were developed could already be solved with existing tools, albeit probably not as efficiently. Hence the question: which method is best? The answer came in 1995, when two researchers at the renowned Santa Fe Institute, David Wolpert and William Macready, proved a number of theorems collectively referred to as the “No Free Lunch” theorems. These results can be read as saying that no single optimisation method is best for all problems: averaged over all possible problems, every method is expected to perform equally well.

This result has important and deep implications. It means that if you swear by a single generic optimisation method and try to use it to solve all your problems, then don’t expect it to perform better than your three-year-old son guessing random solutions to all your different stochastic, multi-objective, constrained optimisation problems.

Given this, it would seem strange that I am about to suggest solving numerous problems with a single optimisation technique – especially since a number of these problems don’t look like optimisation problems at all! The reason for doing so is to show the power of an interesting optimisation technique, as well as its flexibility and generality. In addition, bear in mind that the “No Free Lunch” theorems are a statement about every imaginable optimisation problem, which is just a tad more than the few I will touch on here.


Figure 2 The ST5 antenna’s final optimised design

The class of optimisation technique that I want to discuss here is generally referred to as Genetic Algorithms. They have been successfully used on thousands of research and industrial problems and continue to amaze researchers with their potential to solve problems far beyond their original scope. For example, one of the most famous applications of Genetic Algorithms was by NASA in 2006 to evolve a new antenna design for its ST5 mission. As Figure 2 shows, the optimal design was anything but intuitive and most likely would not have been found by a “standard” optimisation technique seeded with initial human guesses.

So what type of problem would be considered inappropriate for Genetic Algorithms? First of all, you couldn’t do much worse than writing some code for your son or daughter so that in their next algebra exam they can solve x + 5 = 6 with your tailor-made Genetic Algorithm. They could just let your code chug away and sit there patiently while it aces the exam for them. Although probably not the most effective use of Genetic Algorithms, it is entirely possible.

So let’s take that thought one step further. What about solving the humble quadratic equation with a Genetic Algorithm? It has been done (and done again, and again). But the quadratic equation belongs to pure mathematics, right? And it’s an equation you can solve directly, isn’t it? Yes and yes, but interestingly enough Genetic Algorithms have started to make their way into some of the purest areas of mathematics, to help solve problems that are stumping pure mathematicians. This is truly one area where you would not expect the tools of applied mathematicians to come to the rescue of pure mathematicians.
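In the same over-engineered spirit, here is a toy genetic algorithm that hunts for roots of x^2 - 5x + 6 = 0 (whose exact roots are 2 and 3) by evolving candidate solutions. Every number in it – population size, mutation rate, number of generations – is an arbitrary choice, which foreshadows the parameter problem discussed in the follow-up post.

# Deliberately over-engineered toy: a genetic algorithm searching for roots of
# x^2 - 5x + 6 = 0.  All parameters are arbitrary choices.
import random

def fitness(x):
    return abs(x**2 - 5*x + 6)      # zero means we have found a root

POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 200, 0.3

population = [random.uniform(-10, 10) for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # selection: keep the fitter half of the population
    population = sorted(population, key=fitness)[:POP_SIZE // 2]
    children = []
    while len(population) + len(children) < POP_SIZE:
        a, b = random.sample(population, 2)
        child = (a + b) / 2                       # crossover: average the two parents
        if random.random() < MUTATION_RATE:
            child += random.gauss(0, 0.5)         # mutation: a small random nudge
        children.append(child)
    population += children

best = min(population, key=fitness)
print("approximate root found:", round(best, 4))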

We have only scratched the surface of unexpected applications of Genetic Algorithms. In fact, they have made an appearance in almost every endeavour of research from physics to economics, from architecture to chemistry and even all the way back to their nature-inspired beginnings with numerous applications in biology. So in spite of our knowledge that there is no one method to solve all problems, Genetic Algorithms present us with a versatile and powerful tool that seems to have a lot more in store for our future problem solvers.

Read the second part here.


Is your decision making based on Analytics or Gut feeling?

Despite the proliferation of data and the constant drive to adopt innovation to achieve competitive differentiation, many businesses continue to make critical business decisions that are a product of intuition and haste rather than fact and rigour.

40% of major decisions are still based on your Manager’s gut feeling

The question of why companies continue to rely on ‘best guesses’, despite the availability of advanced business analytics, quantitative models and optimisation methods, is confounding. Using data and quantitative analysis to support decision making removes ambiguity and improves speed and accuracy. Decisions are more likely to be correct, and the process has more rigour thanks to the application of the scientific method.


2 years at Biarri

LinkedIn recently reminded me about my 2-year anniversary at Biarri. It feels like longer (a colleague has called it “Biarri time-dilation”), and I think that is a good thing.

In my previous job with a multinational corporate, I had a comfortable position running a great development and support team, essentially for a single, well-earning COTS product. The product has an international client base, which meant I scored a few business-class trips each year to meet with generally happy customers. Needless to say, it was pretty sweet for a number of years, and again, comfortable.

But as the years passed, the software was ageing along with its core technologies. I had worked on the large code base for so long, I can still recall its structure. I wasn’t learning anything new – it was time to move on.

I’d worked with a few guys who had left the company for various reasons, and increasingly often I would hear of “Biarri” – a startup built on accessible mathematics delivered in SaaS fashion. Biarri was gaining momentum, and the more I heard about the kind of projects and people involved, the keener I was to look into a possible move away from my comfortable office.

I met with one of the Biarri founders, Joe Forbes, over coffee at his favourite cafe (in what we started calling his “coffice”). I liked what I heard – it seemed the Biarri guys had learned from the development mistakes we so often see in a corporate culture (“remember the Alamo”, Joe often says). Software was stripped back to the essentials, with interaction design done first rather than as an afterthought (or worse still, by developers!). Interfacing with other systems was largely bypassed, using simple formats (e.g. CSV) to move data between systems. A mathematical approach was taken to solving each problem, formulating each engine in a formal way to exploit the “power of maths”. The company was keeping its IT footprint as light as possible through the use of remotely hosted / cloud-based computing resources, and had a strong focus on keeping costs low (I had always found the waste, and the lack of care about it, frustrating within a corporate). They were using new web-app-based technologies, and finally, they were growing! I jumped ship.

My probation period was hard work. Another newbie – our COO George – and I were placed on a project aiming to model a redesign of a national parcel network, while some of the major Biarri players were on a well-earned spate of leave. George took the reins, and I was back in the developer’s chair, trying to cover ground as quickly as possible. My learning of “new stuff” started from that first week and pretty much hasn’t abated since. As the project wore on, I got to team-lead a few of the more “junior” developers – however, Joe and Ash are exceptionally good at hiring very clever people, so not much leading was required. By the end of the project I had been reasonably well deprogrammed from my old corporate culture (it isn’t your job title that makes you at Biarri – it’s how you perform), I’d worked on the tech stack from back to front, and I was ready for the next challenge.

Since then, I’ve worked on a number of quite different scheduling optimisation software projects. Along the way I’ve learned about linear and mixed integer programming in the “real world” (not just on toy problems), and how the real difficulty can lie in the customised algorithms which feed just enough of a problem into commercial solvers without killing solve time. I’ve seen classic statistical, AI and dynamic programming techniques applied to problems as diverse as hub selection, vehicle routing, fibre network design, demand forecasting and resource allocation. I’ve learned how agent-based approaches can be used in the field of optimisation as well as simulation, and I’ve seen the company mature its software development processes to keep the machine working at the top speed it requires (where “agile” can also mean “predictable”).

As Biarri has rapidly grown over the last few years, so has the code base. It’s great to know refactoring isn’t a dirty word at Biarri. I think the key to keeping Biarri agile will be to avoid falling into the trap of developing large “in house” libraries, and not being afraid to throw away code – that should set us up to refresh the “tech stack” regularly, keeping us ready to use leading-edge technology rather than locked into past decisions.

Just like any workplace, the people make or break the culture. Joe has a “hire slow, fire fast” policy, and this has made for a great place to work. It’s a pretty flat structure, and I hope it stays that way – sometimes someone has to be the boss, but everyone needs to be able and willing to get their hands dirty when required.

I can’t say I’m always “comfortable” at Biarri, but I wouldn’t have it any other way. Looking forward to the next 2 years.


Optimising With Big Data

Can optimisation save the world? Well, it depends on the problem.

Optimisation may not be able to solve all of humanity’s problems, but it has recently made important strides towards helping avert the impending global warming disaster. As I’m sure you’re aware, the global warming crisis is rated by scientists worldwide as one of the most important issues of the twenty-first century. For decades scientists have been warning us that we need to wean ourselves off our dependency on fossil fuels. Although alternatives to fossil fuel-based energy have existed for over half a century, none of them seem to have made a significant impact on the problem yet. Wind power, for example, can currently produce electricity at a price close to that of fossil fuel-based generation, yet in spite of the impending crisis it has still to enter our energy mix in a big way. One of the reasons for this is not simply a lack of political will but a lack of good optimisation tools. In the last five years, however, it seems the winds of change are blowing, because some of wind power’s biggest opponents have gone silent. So what has happened?

To appreciate what has happened and why optimisation is so important, we need to understand what the problem is – and to be honest, the problem is wind. Unfortunately, like visits from your in-laws, wind does not come when we want it to, nor does it behave the way we want it to (imagine generating wind power during a cyclone!). Because of this unpredictability, naively using wind power requires companies to keep back-up power plants on standby in case the wind suddenly stops. Back-up power plants are much less efficient than a permanently running plant, and counter-intuitively a situation like this can lead to more CO2 pollution than if we had just used a coal-fired power plant!

What has happened in the recent past, turning wind energy from a much-debated dream into a reality, is the optimal use of wind turbines to generate energy. Thanks to such optimal use, the Colorado, USA energy company Xcel Energy (which once mailed its customers urging them to oppose a meagre 10% renewable energy contribution) is now planning to profitably source up to 30% of its energy from wind by 2030 (Reference). This is more than the famous 20% by 2020 proposed by Germany (which is now coming under fire due to high energy costs), and it owes its success to better weather prediction and the optimal use of wind turbines.

Optimising wind turbine electricity production is obviously a complicated affair, given that the wind can change direction and strength continuously throughout the day. In its simplest form, the challenge for those operating wind turbines is: given predictions about wind strength and direction in the next moment, how should the turbine’s parameters be adjusted to maximise electricity output? These parameters typically translate into physical properties of the wind turbine, such as the direction the turbine is facing, the angle of the turbine blades, the speed and torque of the turbine drivetrain, and so on. Even if we could perfectly predict the wind conditions in the next moment, we probably would have no direct way of knowing the perfect parameter values for each given set of conditions. To find a solution to this problem, scientists have turned to a set of tools from the world of big data, known as machine learning, to predict the optimal parameters (and to help predict the weather too!).

Machine learning tools are typically used when there is a large amount of data available to teach a system to behave in a certain optimal way. In our case, the goal is to teach the wind turbines to adjust their parameters optimally to maximise electricity output given the predicted wind conditions. Typical examples of machine learning techniques include Bayesian Networks (used everywhere from biology to economics), Support Vector Machines (often used for recognising handwriting) and, probably the most well known, Artificial Neural Networks (famously used by Google in 2012 for the noble cause of identifying pictures of cats in videos).

In the current case of selecting parameters for optimal electricity output from wind turbines, one promising tool is the Artificial Neural Network. For those working in the industry this choice may seem natural; however, implementing a good neural network is fraught with difficulties. One needs to choose a correct model for the function to be optimised; given the available data, one needs to choose a good learning algorithm to train the network; and finally the computations need to be robust to new and unseen inputs. It is precisely in these areas that we have seen significant advances (and also stellar buyouts, such as DeepMind for $450 million in 2014), which have led to more efficient algorithms for parameter selection. In turn this has led to the adoption of more and more wind turbines for renewable energy generation, because each turbine now produces more electricity. In 2013, in Australia alone, plans were underway to install another eleven wind farms, adding 1627 MW to the grid (Reference).
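Purely to illustrate the idea, the sketch below fits a small neural network that maps wind conditions to a blade pitch setting, using synthetic data and the scikit-learn library. The “optimal pitch” rule generating the data is invented, and real turbine control models are vastly more sophisticated than this.

# Illustrative only: learn a mapping from (wind speed, wind direction) to a blade
# pitch setting on synthetic data.  Assumes NumPy and scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
wind_speed = rng.uniform(3, 25, n)        # m/s
wind_dir = rng.uniform(0, 360, n)         # degrees
# an invented "optimal pitch" rule plus noise, standing in for historical data
pitch = 2 + 0.8 * wind_speed + 0.01 * np.abs(wind_dir - 180) + rng.normal(0, 0.5, n)

X = np.column_stack([wind_speed, wind_dir])
X_train, X_test, y_train, y_test = train_test_split(X, pitch, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))

# suggest a pitch setting for forecast conditions of 12 m/s from 200 degrees
print("suggested pitch:", model.predict([[12.0, 200.0]])[0])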

However, when talking about advances in wind turbine technology, what constitutes a significant advance? Let’s take the example of a country that is not even a major player on the world scene but is exemplary when it comes to wind energy generation: Spain. According to the Guardian, wind energy was Spain’s top source of electricity in 2013, generating approximately 54,000 GWh from wind turbines. So assuming we are able to improve our optimisation methods so that electricity production increases by a mere 0.1%, then in Spain alone we would generate an extra 54 GWh of electricity. Given that 1 MWh supplies up to roughly 200 households, this would mean that simply by increasing output by 0.1% in Spain alone, we could supply two cities with Sydney’s population with the extra electricity!

Facts like this are what turn insignificant-sounding headlines from Siemens and GE into such game changers. Siemens recently claimed a breakthrough in its neural network algorithms that enables its turbines to produce one percent more electricity annually under moderate wind conditions. Given the previous calculation, we can see how important this is. Even more impressive, given what a one percent difference can do, was GE’s announcement that it is able to increase its wind turbine output by five percent (reference)! GE hasn’t stated specifically how it did this, but its description of “big data” and “predictive algorithms” suggests similar machine learning algorithms to the Siemens success story.

As we can see, all these small gains add up, and as the reader can guess, without the optimisation tools available today these feats would not have been possible. We can only hope that these improvements point to an industry poised to revolutionise our energy sector and, with the help of optimisation, maybe save the world.

Pre-emptive optimisation

For one of our long-standing clients we have been running vehicle routing optimisations on a daily basis. A file of daily orders is uploaded into our Workbench system, and is split up into several regions, each of which needs to be separately optimised. A planner works through each region, going through a series of data checks (e.g. location geocode checking), before hitting an “Optimise” button.

All of the heavy lifting is done on a server (the planner accesses it through a web app in a browser), so it’s possible for the server to silently start the required optimisations without the planner’s involvement. In the (fairly common) case where a region does not require any data corrections, by the time the planner reaches the optimisation stage the result can be immediately available (as it has already been run, or is in the process of being run). This idea has now been implemented, and it took only a small amount of Python code.

Furthermore, it runs in parallel as each optimisation is itself split into a separate child process (running a C++ exe) which Linux distributes across the 8 cores of our server machine.
The pre-emptive optimisations are kicked off using Python’s multiprocessing package, as follows:

from multiprocessing import Process

p = Process(target=start_preemptive_optimisations, args=(…))

p.start()

Control is returned to the user at this point while the optimisations run in the background. Results are stored in a folder whose name is recorded in a database table. When the planner later presses Optimise, the system checks whether there have been any data corrections: if so, it runs the optimisation from scratch as usual (and the pre-emptive result for that region is never referenced); if there are no corrections, the system simply uses the stored result from the folder.
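In rough Python, the decision described above looks something like the sketch below; the helper functions and the region name are illustrative stand-ins, not the actual Workbench code.

# Hypothetical sketch of "reuse the pre-emptive result if it is still valid".
import os

# Illustrative stubs -- in the real system these would query the database and uploads.
def region_has_data_corrections(region):
    return False

def lookup_preemptive_result_folder(region):
    return "/tmp/preemptive/%s" % region     # folder name kept in a database table

def run_optimisation_from_scratch(region):
    return "result computed from scratch for %s" % region

def get_optimisation_result(region):
    if region_has_data_corrections(region):
        # the uploaded data changed, so any pre-emptive result is stale
        return run_optimisation_from_scratch(region)

    result_path = os.path.join(lookup_preemptive_result_folder(region), "result.json")
    if os.path.exists(result_path):
        with open(result_path) as f:
            return f.read()                  # the pre-emptive run already finished; reuse it

    # pre-emptive run not started or not finished yet: fall back to a normal run
    return run_optimisation_from_scratch(region)

print(get_optimisation_result("metro_north"))   # "metro_north" is a made-up region name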

The end result for the user is that in many cases the optimisations appear to run almost instantaneously. There are really no downsides as we do not pay for our servers on a CPU cycle basis, so we can easily be wasteful of our server CPU time and run these optimisations even if their results are sometimes not needed.

One “wrinkle” we discovered is that we had to make our process checking more robust. There is JavaScript in our browser front end that polls for certain events, such as an optimisation finishing, which is indicated by a process ceasing to exist. The Python for this is shown below, where “pid” is a process ID. The function returns True if the given process has finished, and False otherwise.
import multiprocessing
import os

def check_pid_and_return_whether_process_has_finished(pid):
    if pid and pid > 0:
        # Reap all zombie children first; this also seems to pick up non-children processes.
        multiprocessing.active_children()
        try:
            # This reaps zombies that are child processes, as it gives these processes a
            # chance to output their final return value to the OS.
            os.waitpid(pid, os.WNOHANG)
        except OSError as e:
            # errno 10 (ECHILD) indicates pid is not a child process; in that case we do
            # nothing and let os.kill below decide whether the process has finished.
            if int(e.errno) != 10:
                return True
        try:
            # Signal 0 doesn't actually kill the process, but raises an OSError if the
            # process pid does not exist; this indicates the process is finished.
            # Applies to all processes, not just children.
            os.kill(pid, 0)
        except OSError:
            return True
    else:
        return True
    return False

Note the “reaping” of zombie processes here, and the complication that arises if a process is not a direct child of the calling Python process (it might be a “child of a child”). For that situation we rely on a call to the (rather mis-named, for our purposes) function os.kill.