## SaaS Optimisation vs Traditional Approaches

SaaS Optimisation solutions have become a necessity in modern business. They deliver not only lower costs, better products, and ease of use, but also better customer satisfaction, better employee satisfaction, and streamlined business processes.

Gartner predicts the global SaaS market will reach $85.1 billion in 2019, up from $72.2 billion in 2018. While much of this growth will come from the continued adoption of infrastructure such as CRM and ERP, point solutions will continue to emerge to fill gaps.

### What are point solutions and why should they matter?

Unlike ERP systems, which provide end-to-end solutions for problems you might not have, point solutions target specific pain points within your business: the best way to route your vehicles, the best way to manage your employees, or the best way to build your facilities (location and size).

Point solutions can also integrate with larger ERP systems.

A unique problem requires a unique solution.

### The Key Benefits of SaaS

The benefits of a SaaS solution over a traditional approach are evident in these five key areas:

• Reduced time to benefit. SaaS development shortens the feedback cycle to the point where anyone, anywhere in the world, can test software in a web browser.
• Lower costs. You don't need expensive IT infrastructure or a large IT department; these concerns are outsourced to the company offering the solution.
• Scalability and integration. SaaS solutions can be scaled up and down based on your requirements, so you can draw on more power during peak times or seasonal demand.
• Faster releases. Upgrades ship sooner because the feedback loop is shorter and troubleshooting is much easier.
• Ease of use. It is simple to get started and to run proofs of concept.

These benefits have allowed Biarri to develop world-class tools for operational and strategic business requirements.

### SaaS vs Traditional Software at Biarri

Beyond these generic benefits, there are a number of specific benefits we see at Biarri. We believe the SaaS approach is far superior at getting the client's solution into a production environment sooner rather than later, and it also makes the job of support much more efficient. The specific benefits Biarri sees are outlined in the table below.

|  | SaaS | Traditional Software Development |
| --- | --- | --- |
| Feedback | Easy to get constant feedback using the Mitigation by Iteration approach. More iterations and feedback from clients can significantly reduce development timeframes. | The client needs to wait for the final version to be installed in their testing environment. If iterations are used, they are usually far less frequent (and more difficult to manage) than with a SaaS approach. |
| User Acceptance Testing (UAT) | All the client needs to do is use a testing environment URL. No UAT-specific hardware is required by the client. | Often clients need to set up different computers and/or databases for the UAT environment. |
| Delivery | The client usually only needs to type the production URL into a web browser. Sometimes their firewall needs to be configured to allow access to our servers. | Software has to be installed on each machine, which requires more resources from IT departments. |
| Hardware | Clients only need access to an internet-connected device. It does not need to be powerful; it could be a desktop or mobile device. | Each user needs the computing power and memory required to run potentially resource-hungry optimisations. |
| Software Cost | Our optimisations sometimes use expensive third-party tools (like Gurobi) which run on our servers. This cost is distributed across all our clients in the form of licensing fees. | Each user would need the expensive third-party software installed on their computer, which can massively increase the final price if there are many users. |
| Support | We can easily replicate issues because your data and logs are on our server. This often results in issues being replicated, addressed, and a fix delivered all in the same day. | Often it can be difficult to even determine whether an issue is caused by the software or by the complex IT environment it runs in (internal firewall issues, database issues, Citrix issues, authentication issues, timeout issues, etc.). If the problem is with the software, it is sometimes necessary to export the client database and transfer it to the vendor (via FTP, or if it's too large, via snail mail). Replicating the issue can be difficult because the client's and vendor's environments differ (e.g. different database versions, patch versions or operating system versions). This process can often take weeks, even for relatively simple issues. |


## Mitigation by Iteration – Facilitating the Optimisation Journey

For companies to remain competitive, they require smart systems that solve their unique day-to-day business problems. However, when adopting these systems, many decision makers get lost in complexity due to limited communication and collaboration during the implementation process.

We are at the cutting edge of the latest optimisation methodologies and web technologies. However, unlike many other optimisation and analytics companies, one of our main goals is to make powerful optimisation accessible in the real world and bring value to our clients.

We specialise in the development of web applications and smart optimisation engines – delivered in less time than you might expect. It's not unusual for us to go from an initial workshop with a client, to understanding their problem, to having a fully functional optimiser in a production environment within three months.

On top of this, the same people are often involved through the entire SDLC (software development life cycle), from spec/design and theory through to implementation and delivery/support. This reduces the overhead many organisations incur by splitting business analyst and developer roles. The people implementing the solution actually understand your problem and work with you to solve it.

## Optimisation through the cloud – The Biarri Workbench

Optimising your business, by operating in the most efficient and effective way, is essential to delivering a greater competitive advantage and is key to driving business success.

But, how can you drive innovation, empower your business decisions and survive in a highly competitive and volatile global marketplace?

A KPMG Report on elevating business in the cloud found that,

As cloud adoption picks up pace, cloud is poised not only to grow in scale, but will also increasingly impact more and more areas of the business. They do not only result in cost savings, but they can help organisations increase workforce flexibility, improve customer service, and enhance data analytics. In other words, the cloud should be considered a key enabler of the corporate strategy, driving strategic business transformations of all kinds.

Gartner Research supports this by naming the cloud as one of the top 10 strategic technology trends for 2015 that will have a significant impact on organisations during the next three years.

### Agile Analytics driving business optimisation through the cloud

Analytics delivered through the cloud empowers you to make adaptable decisions quickly, with far more rigour. Having access to your data anywhere, any time, on any device means you no longer require large IT systems that are overly complex and not built for your specific problem. Cloud solutions allow you to scale up and down, and adapt depending on your specific target requirements. This means you can have point solutions targeting specific pain points within your business.

### How does cloud based optimisation fit in my business?

Optimisation can be applied to most business problems. The question you should be asking yourself is: what is the best possible outcome of the decisions I'm about to make?

For your Supply Chain – How can I best support effective capital decisions to ensure end-to-end efficiency? – Learn More

For your Logistics – How should I best manage my fleet, workforce, facilities and work communications? – Learn More

For your Workforce – How should I best plan my workforce across the next few hours, days, and months into the future? – Learn More

For your Analytics – How can I be sure that I am making the right decisions and considering all variables? – Learn More

### How do we do it?

With our team of mathematicians, software developers and UI designers, we use the Biarri Workbench, an intuitive cloud-based platform designed to support the rapid development of powerful web-based software solutions.

With the powerful development platform of The Biarri Workbench, we are able to easily customise, alter, and build a solution for your specific business requirements.

The Workbench is Accessible. It empowers you to reduce your company's IT footprint and access world-class optimisation anywhere, anytime, on any device.

The Workbench is Customisable. Built from the ground up to allow for rapid, bespoke deployment of software, built for your specific optimisation requirements.

The Workbench is Easy To Use. Through simple linear workflows and customised visualisation widgets, anyone in your business can easily master your software, reducing the need for long training and workforce upskilling.

The Workbench is Scalable. Regardless of business size or project complexity, cloud-based delivery and bespoke software solutions built around your requirements mean there is no more one-size-fits-all approach.

The Workbench is Powerful. At the core of the Workbench are complex mathematical engines powered by industry-grade commercial solvers. This means you can be confident in the justification behind your decision making.

The Workbench is Efficient. Cloud-based software delivery gives you the power to determine who sees what data, and when – providing you with more control and rigour over your optimisation processes.

The Workbench is Secure. Security measures exceed both industry and customer requirements with the ability to easily accommodate your specific needs.

## SaaS deployments are now ‘mission critical’

Gartner recently published a survey finding that SaaS deployments are now 'mission critical.' Among the key reasons behind this finding: respondents cited cost savings, increased innovation, and better accessibility to their systems as the key drivers of the move away from local software solutions.

Joanne Correia, Gartner Research Vice President, said:

#### “The most commonly cited reasons the survey found for deploying SaaS were for development and testing production/mission-critical workloads,” and went on to say “This is an affirmation that more businesses are comfortable with cloud deployments beyond the front office running salesforce automation (SFA) and email.”

This shows that companies are becoming more aware of, and switched on to, the benefits that cloud-based software can bring to their business.

It was also demonstrated that on top of cost savings, accessibility, and innovation, SaaS based systems allowed for easier training and lower learning curves for employees.

## Biarri empowering you through the cloud

Biarri was established in 2009 with the mission of providing accessible business optimisation to all clients, regardless of size or budget. We develop bespoke SaaS solutions for you, with you, allowing the solution to meet your specific requirements.

We have developed a range of applications for our clients to suit their specific Advanced Planning and Scheduling, Workforce Management, Business Analytics and Supply Chain needs.

Get in touch and see how you can benefit from our solutions today!

# COTS (commercial off the shelf) puts the bars around accessible Mathematics: leads to crying babies

Our philosophy is to make the power of mathematics accessible. Why? Because we think it isn't currently very accessible. This limits the number of people who can use it to get value, reduces the value derived by those who do use it, and that's a crying shame in a world that desperately needs efficiency.

We have all seen it multiple times in multiple organisations. It’s the hard to use (probably ugly), not really fit for purpose (lots of workarounds), complicated IT (n tier, client/server, VM, Citrix, Oracle thing) approach to providing optimisation software.

So how did it come into being? Here’s how I see it:

### “I’m unique; give me your shrink wrapped product!” – and other amusing procurement stories

Let’s assume requirements are done, I’ll save organisational scope bloat for another time. The next question is build or buy? How will we best get something that is a close match to need/requirements?

A market search ensues, only to discover that the requirements are pretty unique. So a custom/bespoke solution is required! That makes sense, but most organisations quickly discover that bespoke = expensive (time and money), just like a tailor-made suit is more expensive than buying off the rack.

It’s for this reason that hard core mathematics/optimisation solutions have mainly been consumed by capital intensive industries where spending a few million to save tens or hundreds of millions made the business case stack up.

Therefore organisations often seek a COTS (Commercial Off The Shelf) solution (often after an expensive run in with a bespoke approach), with the expectation that if they specify what they need and buy something “off the shelf” that fits then it should be low risk (time and money). It appears to be quite an entrenched view with Australian CIOs, and in some cases is justified, particularly in back office functions that don’t offer opportunity for differentiation. A point Wesfarmers Insurance CIO David Hackshall and DoD CIO Peter Lawrence make in an article by Brian Corrigan on itnews.com.au titled “How COTS became Australia’s default software setting”.

In the world of mathematics, optimisation and advanced planning and scheduling, it would be a very rare occasion, with a simple set of generic requirements, where COTS really worked. Take one of the classical problems where mathematics is applied: vehicle routing. This is a well-picked-over area and sounds simple enough. Nonetheless, vendors fill niches within this niche in order to provide a match to requirements. As the Vehicle Routing survey in the February 2014 issue of OR/MS Today says, "VR customers are different, and so are their routing needs and problems, which require flexible, innovative answers".
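To make the vehicle-routing example concrete, here is a toy sketch of the simplest possible heuristic, nearest-neighbour route construction (made-up coordinates; a real VRP engine also handles capacities, time windows, multiple vehicles and far stronger optimisation, which is exactly why customers' requirements diverge):

```python
import math

def nearest_neighbour_route(depot, stops):
    """Greedy route construction: repeatedly visit the closest unvisited stop,
    then return to the depot. Illustration only -- not a production method."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    route, current = [depot], depot
    remaining = list(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: dist(current, s))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)  # close the tour back at the depot
    return route

# Hypothetical depot and customer locations (x, y).
depot = (0, 0)
stops = [(2, 3), (5, 1), (1, 7), (6, 6)]
print(nearest_neighbour_route(depot, stops))
```

Even this ten-line heuristic already makes choices (single vehicle, Euclidean distance, no capacities) that will not match any two customers exactly, which is the crux of the COTS-fit problem.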

Vendors react to this COTS centric procurement environment in a predictable way, and of course say they sell COTS because otherwise when they get evaluated on the inevitable RFX criteria they would fail miserably. The solution? They will (and I’ve been there) include “configuration” or “installation services” as ways to mask software development. The result? You get something that wasn’t a great fit with lots of add on development to meet your requirements. It’s hard to use, slow and doesn’t really provide the solutions you were hoping for. In many cases you end up with the worst of both worlds, the cost of bespoke but the poor fit of COTS.

As the aforementioned itnews.com.au article says “The middle ground between buying readymade software and building bespoke solutions is to customise a COTS package. Yet as many CIOs have discovered at great cost to their budgets and mental health, this can be a painful experience.”

This COTS/bespoke paradox is the problem we saw, and it is what we aim to address. So what does Biarri do differently? We take the benefits of bespoke and deliver them cheaply and quickly. You could say we aim to provide the best of both worlds.

## Do the math

How do we do it? First of all, we do the maths first! Prove you can solve the underlying problem, and that it is worth solving, before investing in the delivery mechanism. Once you know there is value in the maths, make sure people can digest it via a well-designed solution. The Biarri Workbench is our SaaS platform that allows us to very quickly develop easy-to-use, custom applications with unique workflows, using an iterative/agile and lightweight deployment.

## Who says B2C owns good UX?

Easy to use means designed with the user in mind. In the consumer world (B2C) this is the natural order of things (thanks Apple). In the business world (B2B) this has taken a back seat, and that’s where our industrial designers come in. Working with users to really understand how they do their job and will interact with the system. Producing mock-ups/concepts and getting early feedback before a line of code is written.

So now we’ve proven the maths will provide value and designed a solution that users will love to use.

## Rinse and Repeat

What comes next is turning this into reality quickly, cheaply and iteratively. Quickly and cheaply are thanks to the Biarri Workbench providing security, a common database, and existing UI components, libraries and widgets that enable a custom-built application to be constructed very quickly. And "iteratively" is thanks to being web delivered, which means we can provide early access to users to start providing feedback. Agile development takes on real meaning as users see the mock-ups they helped design come alive in their web browser mere weeks (or even just days) after designing them. Engagement and user buy-in are huge as feedback is provided, incorporated and delivered instantly. Australia Post's CIO Andrew Walduck understands this approach: "The number of times I've seen operating models where you start with requirements on one side, you dump it into operations on the other, and it fundamentally misses the point".

## It takes different strokes to move the world… yes it does

Do you remember the late '70s/early '80s TV series "Diff'rent Strokes"? I used to love the theme song.

Everybody’s got a special kind of story
Everybody finds a way to shine,
It don’t matter that you got not alot
So what,
They’ll have theirs, and you’ll have yours, and I’ll have mine.
And together we’ll be fine….

When you start looking for your next optimisation, analytics or advanced planning and scheduling solution and your CIO/CFO says “budgets are tight and you can’t buy bespoke, you have to go COTS”, remember “it don’t matter that you got not a lot… you’ll have yours” because Biarri has a special kind of story.

## The Future of Research in Operations Research

New technologies, particularly Internet-enabled technologies like cloud computing and social collaboration tools, are changing how mathematics is done.  This post showcases some of these emerging trends and lays out a vision for how the research and practice of Operations Research in particular could be much more effective.

Consider one of the most famous mathematical theorems of all: Fermat's Last Theorem.  This theorem can be easily stated and readily understood, but has been truly devilish to prove, so much so that it lay unsolved for around 360 years.  Andrew Wiles announced a proof in 1993; after a gap was found and repaired, the final 108-page proof was published in 1995, the product of 7 years of effort.  This is a very "traditional" way of doing mathematics (albeit, pure mathematics), where a "lone wolf" makes a large single contribution; but it also hints at the limits of what one brain can achieve.

By contrast, there is an equally old conjecture, from 1611, that has also only recently been proven.  This is Kepler’s Conjecture, which states that the most efficient way to stack spheres has 74.048% density.  An example of this “packing” is hexagonal close packing, which is the way you typically see oranges stacked in a shop.  In 1998 a proof emerged for Kepler’s Conjecture which required the solution of about 100,000 Linear Programming problems by computer.  This shows the emergence of a different model for “doing maths”, that is applicable even for purposes of obtaining mathematical proofs.
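The 74.048% figure above is exactly pi/(3*sqrt(2)), the density of hexagonal close packing, and is easy to verify numerically:

```python
import math

# Density of hexagonal close packing (equivalently, FCC): pi / (3 * sqrt(2)).
# Hales's 1998 proof of Kepler's Conjecture showed no packing can beat this.
density = math.pi / (3 * math.sqrt(2))

print(f"{density:.5%}")  # ~74.048%
```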

It is no surprise that computers have become indispensable for mathematics, particularly in applied mathematics in general, and in Operations Research in particular.  What is more interesting is the emerging role of other new technologies on how mathematics might be done.

These trends include:

•    More data (e.g. availability of spatial data via Google Earth; the prevalence of data mining and “big data” analytics), faster machines (the well-documented exponential growth of computing power), and more machines (we are rapidly approaching the age of ubiquitous computing).
•    Internet-enabled technologies such as mashups, Software as a Service, open APIs, cloud computing, web 2.0, and social network integration.  Browsers are prevalent and using web and mobile apps instead of desktop apps – even for complex tasks like image editing – is now commonplace.  These newer trends all lay on top of the benefits and behaviours we have already become accustomed to from the Internet, such as: increasing market efficiencies (e.g. sectors that approach perfect information, such as hotel and flight bookings, and connecting businesses more directly to customers and suppliers), the commoditising of products (e.g. the so-called “race to the bottom” for pricing), the “long tail” (disproportionately favouring small businesses through the power of search e.g. niche goods and services); hypercompetition (where it becomes more difficult to monopolise by keeping customers ignorant of alternatives); and the abolition of geography (the so-called “global village”).
•    Competition, crowd sourcing and collaboration – e.g. log-in style tools which let people from anywhere collaborate together; there are many examples of competitive crowd-sourcing mentioned below.
•    Not to mention the remarkable explosion of knowledge enabled through Wikipedia, Google, etc.

There have been some astonishing examples of how these trends have been put to use, both in academic and commercial contexts:

•    SETI@Home – a large-scale use of distributed computing over the Internet for research purposes, analysing radio signals (using digital signal processing) to look for signs of extra-terrestrial intelligence.
•    The Folding@Home project – using a statistical simulation method for protein folding.  This used consumer computation (such as Playstation 3 cores and GPUs) to achieve millions of CPU days in short periods of time; it was much faster than the platform powering the SETI projects, and one of the largest scale distributed computing systems ever created.  In Jan 2010 it simulated protein folding in the 1.5ms range – thousands of times longer than previously achieved.
•    Netflix prize – a global competition for collaborative filtering of movie recommendations.  The competition was started in October 2006, and by June 2007 there were 20,000 teams competing.  The Netflix grand prize of $1m was eventually awarded in September 2009. The training data had 100 million ratings over 17,770 movies, and the winner had to beat Netflix's Cinematch algorithm (RMSE 0.9514) by at least 10%.  Netflix's own algorithm was beaten within 6 days of competition start.
•    Other examples of competitive or prize-driven activity: the Millennium Prize Problems, 99Designs.com, Kaggle.com – like the Netflix prize, many of these would not be possible at large scale without internet enablement.
•    Human Genome Project – a good example of exponential progress, where the pace far outstripped predictions.
•     Citizen Science, arXiv and various open science initiatives show a trend towards more open and collaborative approaches to science.
•     Polymath project – in a 2009 post on his blog, Timothy Gowers asked “is massively collaborative mathematics possible?”. This post led to the collaborative solution of a hard combinatorics problem through blog comments, in around 7 weeks. The success of the so-called “Polymath1” problem has spawned additional Polymath collaborations.
•    In 1999, Garry Kasparov played and eventually won a game of chess against a “World Team” which decided its moves by the votes of thousands of chessplayers, including many rank amateurs.  Kasparov called this challenging game “the greatest game in the history of chess”, and co-authored a book as a result, constituting the longest analysis of a single game of chess.

These examples of massive collaboration and computation are not isolated curiosities, but the leading edge of a new approach to solving problems, that shows how knowledge and creativity can be harnessed by new technologies.  They show that there is an emerging new way of working which can be surprisingly effective.

Consider now the typical Operations Research researcher/academic (and, to a lesser extent, many OR practitioners):

•    They care about the HOW: algorithms, parameters, results
•    They want to collaborate with colleagues
•    They want to replicate and build upon the work of others
•    They’d like their contributions to be recognised – in both senses of the word: both visible, and valued.

The usual current way of working, through journal paper submissions, does not necessarily cater well for these needs.  Why not?  Consider a typical OR paper, which might contain an Abstract, a Literature Review, a Statement of a Problem, an Approach, some Results and experiments, and a Conclusion and References.  Such papers have some good properties: they are generally self-contained (you can find a lot out about the given problem once you’ve found the paper); they’re peer-reviewed, and – tellingly, perhaps – they’re easily replicated or extended by same author(s) (for their next paper!).  However, they are very often:

x    Difficult to search for and find – papers are often held behind pay-walls or restricted access (e.g. in University collections).
x    Hard to reproduce the results – in Operations Research, implementation details (parameters, source code, data sets) are important, but these aspects are typically not peer-reviewed – so how can the computations be verified, or replicated by others?  They fail Karl Popper’s basic test of falsifiability in science.  Even when source code is made available, it has likely been run in a completely different environment, making proper comparison of results or run-times difficult.
x    Long gestation times – the publication cycle from submission to publication is often extensive, and includes opaque periods when papers are reviewed.  This tends to encourage a culture of “the scoop” – a pressure to publish a new result before someone else.  Moreover, arguably the entire publication environment is self-serving and institutionally locked in (by universities and journal publishers).  Neither is it very democratic; when students and researchers leave academia and become OR practitioners, it can be difficult to keep close ties.
x      “Soft” links – the references in papers are usually simply textual, not hypertext links, even when published electronically.

The OR community prides itself on the Science of Better, of finding optimal or near-optimal ways to improve the world, but its own internal mechanism for improvement is itself inefficient (sometimes highly so).  In particular the process does not much allow for micro-contributions; the slow journal-focused approach inhibits rapid incremental development (which is somewhat ironic, considering that many papers offer slight improvements upon previously published results).  Nor does it make it easy to discover the state of the art for a particular problem – e.g. what are the best algorithms, the best test instances, and the best known results on those instances.  This must often be laboriously uncovered: even when the papers embodying the best known results are found, the associated data sets are non-standardised and often held on someone’s home page website; there is no standard code (and very often, no code at all) for reading in the data sets, and sometimes interpretation of the data is needed as documentation is lacking.  Certainly there are isolated but laudable aspirations towards “Reproducible Research” in science, but it has not been generally adopted by the OR community.

The question therefore becomes: How can we harness new technologies for Internet-based collaboration and crowd-sourcing, to do Operations Research more effectively?  What if we could complement (or even avoid!) slow journal-focused progress?  How can we improve upon silo’ed academic results and source code?

We can remove IT burdens by using the power of cloud computing – delivering powerful computational and collaborative capability through the browser, being flexible about how the underlying computational power is employed, and storing the data in the cloud.   Software as a service, or open-sourced, releases us from capital cost investment (which also allows us to switch more easily and not get too “invested” into one way of working); instead of the pursuit of mathematics research getting buried under software installation, environment issues and management (the tail wagging the dog), we can again make computation serve us.  Using servers in the cloud is usually not time-shared (as in the bad old days) – in fact computation power is generally cheap enough that we can now even be wasteful of server resources.  We can “spin up” compute instances and pay per hour if we like.  A good example is Gurobi, the leading commercial LP/MIP solver, which now offers Amazon EC2 instances prebuilt with Gurobi which can be essentially “rented” by time (in small or large amounts), so that no installation is necessary.

We can share models and data by using versioned wiki-style tools that store models and their results in the cloud.  I lay out a detailed vision for how this might work later.

We can aim to build an online, global OR community that collaborates frictionlessly, using forums and social network integration on a common platform.  Log-ins would facilitate a user and user group system; websites (such as StackOverflow and its ilk) also show that user reputations can work.  Particularly, a common platform can also build bridges between academics and practitioners.

If these aims seem ambitious, it’s worth pointing out that many of the pieces of the puzzle we need are already (conceptually at least) in place:

•    www.or-exchange.com / math.stackexchange.com
•    OR Library
•    www.exampleproblems.com
•    www.wikipedia.org / www.proofwiki.org
•    Gurobi via AWS + Python
•    MathML / MathJax
•    Tons of modelling languages (Mosel, AMPL, etc)
•    Online IDEs – e.g. CodePad, Eclipse Orion, etc.
•    Collaborative referencing tools e.g. www.mendeley.com
•    Cloud storage and cloud platforms (examples are Google App Engine, Heroku, etc)

Some of these examples show skeletal websites whose communities have not really “taken off”.  Part of the reason is that they are not open enough – for example, OR Library, probably the biggest resource for OR data sets on the Internet, currently resides on someone’s home page – it is not editable (you cannot add a data set for a new problem for example), or integrated (even with basic links) to the associated results or source codes; nor does it allow for comments or discussion.  Other tools lack adoption simply for not being powerful, fully-featured or wide-ranging enough to attract a broad user base.

So we have tools which form the pieces of the puzzle, they just need to be put together well to serve a more ambitious purpose.  What is this ultimate purpose? I claim that our goal should be global crowd-sourced Operations Research in real time.

Here’s one way that it might work.  In the pictures I’ve cheekily called this rough mock up of such a platform “OR 4 ALL”.

The different types of pages are:

A Problem page describes a mathematical problem, for example, “The Capacitated Vehicle Routing Problem”.  These pages will look similar to the Wikipedia page, except that they will use a math-aware markup language to facilitate easy authorship (Wikipedia pages use images for equations, which provides a barrier to making micro-changes).  These pages will provide links to the associated data set pages, and the associated algorithm pages (see below).  These links will likely be ranked in some way (for example, the algorithm that has the most community votes might rank first, and there might also be an automatic ranking of the algorithms that have produced the best known solutions to the widest range of data sets).

Algorithm Approaches pages describe an approach to solving a problem.  Here there might also be a discussion of the parameters and their effects, and perhaps some proofs that are specific to this algorithm.

Data Set pages list the data set instances for a given problem, and include links to associated results.

Implementation pages hold modelling language source code.  Python would probably work best for this.  The page would contain an online IDE (essentially, allowing the users to code right inside the browser).  Crucially, however, there is a Solve button, where you can apply the implementation to a data set, and the system will run it for you and compose a Results page with the solution.
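A hypothetical Implementation page might hold nothing more than a function like the sketch below – a nearest-neighbour construction heuristic for a tiny travelling-salesman-style data set.  The `solve` entry point, the data format and the result shape are all illustrative assumptions about how such a platform could work; the Solve button would run `solve` against a chosen Data Set page and write the returned dictionary to a new Results page:

```python
import math

def distance(a, b):
    # Euclidean distance between two (x, y) points
    return math.hypot(a[0] - b[0], a[1] - b[1])

def solve(points):
    """Build a tour greedily, always visiting the nearest unvisited point.

    points: list of (x, y) tuples; the first point is treated as the depot.
    Returns a dict the platform could render as a Results page.
    """
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: distance(last, points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    cost = sum(distance(points[tour[i]], points[tour[i + 1]])
               for i in range(len(tour) - 1))
    cost += distance(points[tour[-1]], points[tour[0]])  # return to depot
    return {"tour": tour, "cost": cost}

# What a "Solve" run on a four-point data set might produce
result = solve([(0, 0), (0, 1), (1, 1), (1, 0)])
print(result["tour"], round(result["cost"], 3))  # -> [0, 1, 2, 3] 4.0
```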

Results pages, like the other pages, include history and versioned snapshots.  However, for this page it links right back to the implementation as it was run, in a timestamped fashion.  That way, if the implementation changes, the result can still be replicated.

Hence, you do not need to leave the OR4ALL web application in order to go right through from composing an algorithm, implementing it and running it, and comparing its results to the state of the art.

To various extents the different types of pages will have the following features:
•    WYSIWYG editing
•    versioning
•    API access
•    voting
•    social media integration

The arrows shown in the diagram are typically “one to many” links.  For example, there will be many algorithmic approaches to one particular problem.  Similarly, for any one problem clearly there are also many data sets.  Each data set/algorithm combination can be run many times with small parameter or implementation changes, giving many result sets.

All changes and contributions are versioned, timestamped and archived.  Contributions are attributed to the appropriate user.  The entire system is searchable, with everything linked.  It allows for rich content, e.g. videos and animations where there is a pedagogical angle, and graphs and other analytical components where there is a data visualisation need.

The data would be also able to be accessed – potentially in unforeseen ways – via open APIs.  That is, other completely separate systems would be able to repurpose the data programmatically; as an example, a completely different web application could be devoted to analysing particular test instances and their results, and it would enjoy direct (read only) access to the required data via API to the OR4ALL system.
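As a sketch of what such programmatic repurposing might look like, suppose a hypothetical read-only endpoint (say, `GET /api/problems/cvrp/datasets/E-n51-k5/results`) returned JSON like the payload below.  The endpoint path and field names are assumptions for illustration, not a real API; a separate analysis tool could then rank results however it liked:

```python
import json

# Hypothetical response body from an OR4ALL-style results endpoint
payload = """
{
  "data_set": "E-n51-k5",
  "results": [
    {"algorithm": "tabu-search-v2", "objective": 524.61, "verified": true},
    {"algorithm": "greedy-baseline", "objective": 612.30, "verified": true},
    {"algorithm": "external-upload", "objective": 521.09, "verified": false}
  ]
}
"""

results = json.loads(payload)["results"]

# A third-party tool might only trust results run inside the platform's
# computation framework, so rank verified results by objective value
best_verified = min((r for r in results if r["verified"]),
                    key=lambda r: r["objective"])
print(best_verified["algorithm"])  # -> tabu-search-v2
```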

Cloud computation is achieved via open APIs.  The algorithms are proven and reproducible for given data sets, so practitioners or others looking to build further on an approach do not have to start from scratch; similarly, practitioners have a starting point if they want to expand or implement an algorithm in their own software.  Furthermore, other, non-default computation engines can potentially be “plugged in” behind the system (perhaps specific ones aimed at particular requirements, e.g. massively parallel dynamic programming).

Clearly a platform like this would be – at a minimum – an amazing boon for students.  But all users would benefit from the open, “many eyes” style of model and implementation development – the phenomenon that “somewhere out there, someone can easily help you”.  In particular, researchers and practitioners can find areas of common interest.

The system as described can start simple but grow in sophistication.  Reputation points might be gained for best solutions, with user points gained from previous best solutions decaying away over time.  Companies can offer bounties for the best solution to their difficult data set.  There would be plenty of opportunities for 3rd parties to provide a marketplace for alternative computation or analysis plug-ins via the APIs. Or, the system could potentially be monetised via advertising – for example, commercial OR companies could proffer their own software packages and services, with these ads appearing only on pages associated with that company’s specific areas of expertise.

Examples of the everyday use of this kind of system in practice might be (from a typical user’s point of view):
•    I get a notification email one day that someone is trying a new approach to solving a problem on a data set that I recently optimised;
•    I get a small boost in reputation because users upvoted an approach I suggested to someone else’s algorithm;
•    I use a proprietary piece of software (completely outside of the OR4ALL system) to find a new, closer-to-optimal solution to a hard problem, and upload it.  It ends up in the linked Results page for that problem, with an icon marking it as “unverified” (as it was run outside the OR4ALL computation framework);
•    I upload a teaching video to an Algorithm Approach page to help the students that I am teaching.

Clearly there are some potential challenges to be overcome: the signal to noise ratio for one (keeping unhelpful contributions from polluting or distorting progressive development).  A system like this would also likely enforce standardisation of formulations, and to a lesser extent implementations – although this arguably has more benefit than downside.  Many in academia might also find it hard to move away from “research grant and published paper”-driven development, or otherwise be uncomfortable with voting or reputation-based status systems; clearly we would need appropriate and effective metrics to measure the contributions by each user or user group.

I firmly believe that this way of working will come about no matter what; I’d be very surprised if in 10 years’ time this more collaborative, technology-enabled approach had not overtaken journal-based research.  So wouldn’t it be better for the OR community to be self-directed and evolve in a very aware way towards this goal?  It might not even be particularly hard to build – as we have shown, most of the pieces are already proven possible in some form or other.  The possibilities and benefits of the system I have described are exciting and endless – if there were a truly open and powerful platform for OR with good community adoption, it would attain uses and purposes which we can now barely imagine.

Further reading:
•    Reinventing Discovery – a book about internet-powered collaborative science
•    Wakari – a new, hosted Python data analysis environment that has some of the features described above

## Using libCurl to make web requests from a C++ program

This blog post shares how to call the libcurl C interface from a C++ program under Windows/Visual Studio, as getting it working is a little non-trivial.

The libcurl API lets you transfer files to a server and call exposed functions outside of a browser context. Because we are using CherryPy with basic user authentication as our web platform, I needed to be able to call various functions such as the one I’m using in this example: “get_result”. The equivalent Linux curl command line would be:
 curl --user user:password "https://www.insertyourURLhere.com/get_result?id=30417&task=9332" 

where obviously you’d change “insertyourURLhere”, use the appropriate user name and password, and I’ve assumed “get_result” takes two arguments, “id” and “task”.

On Windows with Visual Studio, after downloading the libcurl source, you need to first build the libcurl lib. To achieve this, you need to change the C++ Preprocessor definitions by removing “_USRDLL”, and adding instead “CURL_STATICLIB”.

Now, in your own program, add the C++ preprocessor definition “CURL_STATICLIB”, link with libcurl.lib as well as with ws2_32.lib, winmm.lib and wldap32.lib. This will avoid link errors.

The following C++ code shows how to make requests to the web server “get_result” function, using HTTP POST. It demonstrates using a callback function (to retrieve the text of the server’s response), passing the URL encoded arguments (encoded using curl_easy_escape), authenticating using basic authentication, and gathering timing diagnostics.


// With Visual Studio, define CURL_STATICLIB in the preprocessor settings and link with
// libcurl.lib, ws2_32.lib, winmm.lib and wldap32.lib (as described above).
#include <string>
#include <cstring>
#include <cstdio>
#include <curl/curl.h>

using namespace std;

#define YOUR_URL "https://www.insertyourURLhere.com/"
#define MY_USER_AND_PWD "user:password"    // substitute your own credentials

static string gs_strLastResponse;

// Your own handler for the server's response text (not shown here)
bool DoSomethingWithServerResponse(const string& strResponse);

// Callback to gather the response from the server.  It arrives in chunks (typically 16384
// bytes at a time), so the pieces need to be stitched together.  Note the data is not
// null-terminated, so we must append exactly size * nmemb bytes.
size_t function_pt(void *ptr, size_t size, size_t nmemb, void * /*stream*/)
{
    gs_strLastResponse.append((const char*)ptr, size * nmemb);
    return size * nmemb;
}

bool CallServerWithCurl(string strData1, string strData2, string& strErrorDescription)
{
    CURL* curl = curl_easy_init();
    if (curl == NULL)
    {
        strErrorDescription = "Unable to initialise Curl";
        return false;
    }

    curl_easy_setopt(curl, CURLOPT_URL, YOUR_URL "get_result");
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
    curl_easy_setopt(curl, CURLOPT_USERPWD, MY_USER_AND_PWD);    // set user name and password for the authentication

    // URL-encode the arguments and build the POST body
    char* data1 = curl_easy_escape(curl, strData1.c_str(), 0);
    char* data2 = curl_easy_escape(curl, strData2.c_str(), 0);
    string strArguments = string("id=") + data1 + "&task=" + data2;
    curl_free(data1);
    curl_free(data2);
    const char* my_data = strArguments.c_str();

    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, my_data);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)strlen(my_data));   // if we don't provide POSTFIELDSIZE, libcurl will strlen() by itself

    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);     // enable verbose output for easier tracing

    gs_strLastResponse = "";
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, function_pt);        // set a callback to capture the server's response

    CURLcode res = curl_easy_perform(curl);

    // we have to call twice: the first call authenticates, the second call does the work
    gs_strLastResponse = "";
    res = curl_easy_perform(curl);

    if (res != CURLE_OK)
    {
        strErrorDescription = "Curl call to server failed";
        curl_easy_cleanup(curl);
        return false;
    }
    if (!DoSomethingWithServerResponse(gs_strLastResponse))
    {
        strErrorDescription = "Curl call to server returned an unexpected response";
        curl_easy_cleanup(curl);
        return false;
    }

    // extract some transfer info
    double speed_upload = 0.0, total_time = 0.0;
    curl_easy_getinfo(curl, CURLINFO_SPEED_UPLOAD, &speed_upload);
    curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &total_time);
    fprintf(stderr, "Speed: %.3f bytes/sec during %.3f seconds\n", speed_upload, total_time);

    curl_easy_cleanup(curl);

    return true;
}


Libcurl building and linking tips were found at http://curl.haxx.se/mail/lib-2007-04/0120.html and elsewhere.

## Tech Start Up Business Lessons

Biarri commenced as a commercial maths start up almost two years ago. In this time we have learnt a lot.
We focus hard on deploying optimisation and quantitative analytics in accessible ways to deliver the power of mathematics quickly and cheaply. Our just-launched workbench (www.biarriworkbench.com) is a great example of this – providing low-cost monthly rental of powerful maths engines, available over a browser.

While we have been building products and models we have also been building a business and have learnt a few things along the way. Below are a few of the business lessons we have learnt growing a tech start up in Australia.

1. Be in the cloud – because we were delivering our optimisation workbench using the cloud, we sought out cloud services for our internal business needs. Our accounting software, CRM, email and timesheets are all rented from Software as a Service companies. We learnt a lot about what makes a good web app by using these services, and we saved a lot of upfront capital cost. Specifically, let me say that SAASU (www.saasu.com.au) is a really good accounting system for a small business – much easier to use than MYOB or Quicken, in my view.
2. Always push back against one-sided contract terms from big corporates – we find you will almost always get at least what you ask for. In-house lawyers and legal departments will always try it on, especially when they are dealing with a small business – push back hard; there is always some flex.
3. Not all phone companies are the same – one large Australian telco sells conference-calling-enabled handsets while their network does not support conference (e.g. 3-way) calling. This is not disclosed up front – we found out when we tried our first conference call. Ask the question, be wary of penguins and remember Skype is your friend.

Hope these few thoughts help. There is more we are learning each day, so we will stick up some more thoughts soon.

## The Launch of Biarri’s WorkBench

With the impending launch of Biarri’s workbench and our ongoing close relationship with Schweppes for the daily routing of soft drink deliveries (an application of perhaps the most well known operations research problem: the vehicle routing problem), I thought that the following excerpt from a journal article submitted to the Asia Pacific Journal of Operations Research would be a very timely blog post.

The journal article is entitled “Real-Life Vehicle Routing with Time Windows for Visual Attractiveness and Operational Robustness” and it describes the vehicle routing algorithm we have implemented for Schweppes.

The excerpt details a specific example encompassing two things we are very passionate about at Biarri. First “Commercial Mathematics” – that is making OR (well not strictly just OR) work in the real world. And second, the revolutionary capabilities that the advent of cloud computing has for the delivery of software.

“Vehicle routing problems manifest in a remarkably wide range of commercial and non-commercial enterprises. From: industrial waste collection to grocery delivery; underground mining crew replenishment to postal and courier collection and delivery; inbound manufacturing component transportation to finished car distribution; in-home primary health care delivery to pathology specimen clearances from surgeries for analysis; and from coal seam gas field equipment maintenance to beverage distribution, to name but a few.

Automated planning systems used by industry at present are predominantly client-server or desktop based applications. Such systems are often: expensive, requiring a large upfront capital investment; accompanied by a large software deployment project requiring initial and ongoing IT department cooperation; customisable to a particular organisation’s requirements, however commonly retaining a large amount of exposed functionality due to the breadth of the existing client base; and requiring substantial user training as the workflow is usually not restricted in a linear fashion …. Each of these characteristics constitutes a barrier to adoption of automated planning systems, and for most small to medium enterprises these barriers prove insurmountable.

With the advent of cloud computing and software as a service (SaaS) these barriers are being removed. SaaS: embodies a different commercial model; has essentially no IT footprint; mandates (as vendors may never directly interact with potential clients) simple intuitive linear workflows; and involves almost no end user training beyond perhaps an optional demonstration video.

The emergence of this new avenue for the delivery of optimisation based planning systems heralds a heretofore unparalleled opportunity for operations research practitioners to engage with a wider potential consumer base than ever before. However, the nature of the delivery mechanism requires the algorithms developed: to be robust and flexible (within their domain of application they must be capable of dealing with a wide range of input data); to have very short run times (the user base is more likely to be under time pressure than ever before); to produce high quality solutions (noting the inherent trade-off between run time and solution quality); to be wrapped in a simple linear workflow (meaning it is always obvious what the next step in the planning process is); but above all, be able to produce real-life, practically implementable solutions, without the need for user training and/or experience.

For pure delivery, or pure pick up vehicle routing applications, real-life, practically implementable solutions are often synonymous with geographically compact, non-overlapping routes with little or no intra-route cross over. There are numerous reasons why such solutions are preferred …. If a customer cannot be serviced at the preferred time (e.g. the vehicle cannot get access, the customer is closed, another delivery is taking place, the customer is too busy), because the route stays in the same geographical area, it is easy to return to the customer at a later time. During busy traffic periods drivers are loath to exit and re-enter a motorway to service individual customers. Even though such customers may be en route to the bulk of the customers the route services, thus incurring a minimum of additional kilometres, they may nevertheless be far from the majority of the customers the route services. If there is severe traffic disruption, it is easier to use local alternate routes between customers in a route that is geographically compact to ensure that pick-ups or deliveries can still be made. Third party transport providers, which prefer routes to be as simple as possible, may exert some influence over the planning process. Finally … it is easier to maintain customer relationships by assigning drivers to routes that routinely service a similar geographical area. In summary, solutions which are more visually attractive are more robust, and thus more likely to actually deliver the full extent of the cost savings that should flow from the use of automated planning systems.

This paper describes an algorithm for the vehicle routing problem with time windows, …. The algorithm is: robust and flexible; fast; wrapped in a user interface utilising a simple linear workflow and so requires no user training or experience; and produces high quality, visually attractive and practically implementable solutions.”

## Thoughts on Point Solutions

Lately I have been thinking a bit about the advantages of small, tightly focussed web apps (so-called “point solutions”) that scratch a single little itch, versus larger, more powerful and general web apps that tend to deliver more of a total body rub. This question is of utmost importance to a company like Biarri that needs to place its development time and effort into the best channels.

The question was highlighted by a real-world problem a colleague posed recently: how to assign foursomes in rounds of Golf so that every player gets to play with every other player at least once. It is not trivial to construct such a solution (if one even exists) by hand if the constraints are “tight” enough (for example, 20 players and 8 rounds).
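Even the easier half of this problem – checking whether a proposed schedule of foursomes lets every pair of players meet at least once – is worth automating. Here is a small Python sketch of such a verifier; the schedule shown is a made-up toy instance (8 players over 3 rounds), not a solution to the 20-player, 8-round case:

```python
from itertools import combinations

def all_pairs_meet(schedule, n_players):
    """Check that every pair of players shares a group in at least one round.

    schedule: list of rounds; each round is a list of groups (e.g. foursomes).
    """
    met = set()
    for rnd in schedule:
        for group in rnd:
            # record every pair that plays together in this group
            met.update(combinations(sorted(group), 2))
    required = set(combinations(range(n_players), 2))
    return required <= met

# A toy instance: 8 players in foursomes over 3 rounds
schedule = [
    [(0, 1, 2, 3), (4, 5, 6, 7)],
    [(0, 4, 1, 5), (2, 6, 3, 7)],
    [(0, 6, 1, 7), (2, 4, 3, 5)],
]
print(all_pairs_meet(schedule, 8))  # -> True
```

A solver for the construction half could then search over candidate schedules and use this check (or a count of unmet pairs) as its feasibility test.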

Small point solutions that solve a small but non-trivial problem like this might be fairly quick to develop and deploy on the web. But it doesn’t take much feature creep before you get a pile of extra “features” (particular requirements for some players, minimising the number of repeated pairings, right through to printing out score cards etc); before you know it (or more precisely, after months or years of hard coding) you’d have a full-blown Golf Tournament Scheduler. Such a web app might sell for much more, but would probably attract far fewer customers. And what happens to the poor casual golfer or golf tournament organiser on a shoestring budget who just wanted to solve his or her original golf player assignment problem?

In the spirit of acknowledging that the future is impossible to predict, I think Biarri must pursue more wide-ranging, lightweight “point solutions”, particularly at our fledgling stage. More mini-apps with a wider potential customer base will allow us to gauge which itches most need scratching; more complex apps, as every seasoned developer knows, always seem to cause issues and problems – in short, sheer complexity – out of all proportion to the larger line count of code; not to mention being harder for users to use and understand (more buttons!)

Those who have test-driven our Workbench solution will also know that, to some extent, we’re trying to have our cake and eat it too, by allowing these smaller “point” solutions to exist as workflows (standalone web apps) in their own right, whilst also being “nestable” – that is, able to be composed into a larger, more powerful workflow. Look out for Geocoding as a sub-workflow inside Travel Time Calculation, coming to the Workbench very soon. And who knows if the Biarri Golf Tournament Organiser will ever eventuate!