Sales Forecasting Tool

Sales Forecasting Tool to Plan Accurately With Some Simple AI

Throughout my career I’ve worked with salespeople, as a salesman, and in roles supporting sales activities. Sales is one of the most important functions of a business: without sales you have no business, no matter how great your product or service is.

Sales is the fuel that lets any business survive and thrive, which makes planning and forecasting sales one of the most important activities a business does.

That’s why running a business without effective and accurate sales forecasting is a bit like flying a plane without a fuel gauge. Of course, an accurate fuel gauge is neither necessary nor sufficient for generating or maintaining lift; the Wright brothers got away without one. But there is a reason modern planes have them: they give pilots the data they need to make the flight decisions that get them from A to B.

So how are you flying your venture?

Probably the same as most other businesses. 

You gather your sales team and ask them, “How many sales will we have this year?”

In the best-case scenario, they review last year’s sales and make a guess based on gut feeling and intuition (which is not always wrong).

Commonly enough though, misaligned incentives and company sales culture can manifest as a mismatch between targets (optimised for remuneration incentives) and forecasts (optimised for accuracy).

We can do better. 

And to do this we need to use data. But why is data so important?

According to Mal Thatcher, Professor of Digital Practice at QUT,

“By the middle of the century the only tangible asset on an organisation’s balance sheet will be data”

and this is true for your sales too.

To give you and your business a competitive advantage, we at Biarri have developed a simple, easy-to-use Excel sales forecasting tool. There has never been a better time to become data driven, and with Biarri’s new tool it is extremely easy.

Biarri has taken some basic AI techniques and put them into a spreadsheet that requires no macros, no plugins and nothing to install. The AI techniques in this Excel tool will help guide your sales team to make more accurate predictions for the coming year. 

You don’t need to be an expert in AI to leverage the tool. It does all of the hard work for you and provides data-driven monthly predictions for the coming year based on quarterly sales patterns. All you need to know is how to copy and paste a small amount of data.

You can download the tool below for free. There is no need to leave your email address or anything. Biarri’s mission is to make the world more efficient via better decisions powered with mathematics and we believe this tool has the potential to make a difference for your organisation.

Your New Sales Forecasting Tool

Before you download the tool, it is worth explaining what it is and how to use it.

It uses historic data to establish a pattern, then extrapolates this pattern to predict the coming year’s sales.

Not only does the tool provide monthly predictions, it also takes quarterly sales cycles into account. Forecasting quarter by quarter aligns the tool with typical quarterly reporting and captures the variance in quarterly sales. This quarter-by-quarter approach is designed for industries like retail, where some quarters have greater sales (e.g. the Christmas quarter).

There is also a “bad month flag”. This lets users mark months in the past where something truly unusual happened (e.g. COVID) and flag similar events that are expected to occur in the future (in the PREDICTIONS tab).
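Biarri hasn’t published the exact formulas inside the spreadsheet, but the general idea is easy to sketch. The Python snippet below is an illustration only (the trend-plus-quarterly-seasonality model and all names are assumptions, not the tool’s actual internals): it fits a straight-line trend to 36 months of history, learns an average factor for each quarter position while skipping flagged bad months, and extrapolates both forward for 12 months.

def forecast_next_year(monthly_sales, bad_months=None):
    # monthly_sales: 36 values, oldest first; bad_months: optional 36 booleans
    n = len(monthly_sales)
    assert n == 36, "the tool expects exactly 36 months of history"
    bad = bad_months or [False] * n

    # Fit a least-squares trend line, ignoring months flagged as bad
    pts = [(t, s) for t, (s, b) in enumerate(zip(monthly_sales, bad)) if not b]
    mean_t = sum(t for t, _ in pts) / len(pts)
    mean_s = sum(s for _, s in pts) / len(pts)
    slope = (sum((t - mean_t) * (s - mean_s) for t, s in pts)
             / sum((t - mean_t) ** 2 for t, _ in pts))
    intercept = mean_s - slope * mean_t

    # Average each quarter position's ratio to the trend (the seasonal pattern)
    ratios = [[] for _ in range(4)]
    for t, s in pts:
        ratios[(t // 3) % 4].append(s / (intercept + slope * t))
    seasonal = [sum(r) / len(r) if r else 1.0 for r in ratios]

    # Extrapolate the trend 12 months ahead and re-apply the quarterly pattern
    return [(intercept + slope * t) * seasonal[(t // 3) % 4]
            for t in range(n, n + 12)]

The real tool also lets you flag expected bad months in the future (step 5 below), which this sketch omits.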

This spreadsheet comes prefilled with data to show you what it should look like. To use it for yourself, remove the data from the Monthly Sales column in the Data tab and replace it with your own data. The calculations and updates will be carried out automatically. All other cells are locked for your safety. 

How do I use the sales forecasting tool?

The steps to using the sales forecasting tool are as follows:

1. Collect exactly 36 months of contiguous sales data leading up to the month you would like to predict from. E.g. if you want to predict yearly sales from January 2022 until December 2022, collect the 36 months of sales data from January 2019 until December 2021. The model is set up for exactly 36 months of data, no more and no less.

2. Copy this sales data into the Monthly Sales column in the Data tab (in green). The topmost entry should be the oldest month (January 2019 in the example above) and the bottommost entry the newest (December 2021).

Sales forecasting tool monthly sales column

3. In the Data tab, choose the first month of your sales data from the drop-down menu in the Month cell (this cell is green). Also choose the year from the drop-down menu in the Year cell (also green).

Sales forecasting tool month selection

4. For each month, indicate whether a bad event occurred by selecting Yes or No in the Bad Event column (coloured green). If normal trading and fluctuations were occurring, select No. If something truly unusual (e.g. COVID) significantly impacted your sales volumes, select Yes for the affected months.

Sales forecasting tool bad event input column

5. Once this data has been entered, go to the Model Analysis tab to see the outputs of the model. In the Predictions tab, if you expect any bad months in the future, select Yes for the corresponding months in the Bad Event column (in green). Note that this only has an effect if similar bad events occurred in the past.

Bad event output column

6. Finally, your predictions are shown in a graph in the Dashboard tab, with a table showing the cumulative results for each quarter.

Monthly sales prediction graph

Download the sales forecasting tool by clicking on the button below.

What does Biarri do?

Most companies begin with Excel sheets like the one provided here, making one-off decisions about key parts of their business. It is like the first flight of a plane whose fuel gauge is inaccurate because of data issues. At some point, organisations need to lift up from Excel to correct, secure, easy-to-use and more powerful tools, and this is where Biarri helps.

Biarri’s main value proposition is helping clients realise operational excellence in the way they run their business via AI. The core of this is excellent, data-driven decision making. Biarri catalyses AI-driven business decisions with its cloud-based set of mathematical tools, the Workbench.

To discuss how you can leverage your data and turn it into value to reach new operational heights, get in touch using the form below.

Get in touch


Agribusiness Optimisation Solutions

Maths and Machine Learning for Agribusinesses

Mathematics powered by computers is changing the world we live in. At Biarri we see this everywhere, across every industry, and I’m sure you do too. Recently we have delivered a number of Machine Learning and Mathematical Optimisation solutions for agribusinesses in Australia, and were fortunate enough to be invited to speak at the recent Case IH agri-business conference in Mackay.

Ash Nelson, Biarri’s co-founder, presented on maths and predictive analytics for better business decisions. He described how our everyday lives are being changed by corporations leveraging large data sets, advanced statistical analysis and powerful computing resources. Ash then outlined how this same set of technologies can be used to improve business decisions in agriculture: optimising agricultural supply chains and port operations, reducing unplanned equipment failures with intelligent predictive maintenance algorithms, and improving health and safety outcomes for farm workers by better identifying areas of best practice to inform injury prevention initiatives.

Are you interested in leveraging your data using advanced maths to make better business decisions? Don’t hesitate to get in touch with our friendly team.

Biarri and SaaS

SaaS deployments are now ‘mission critical’

Gartner recently published a survey finding that SaaS deployments are now ‘mission critical’. Respondents cited cost savings, increased innovation and better accessibility to their systems as the key drivers for the move away from local software solutions.

Joanne Correia, Gartner Research Vice President, said,

“The most commonly cited reasons the survey found for deploying SaaS were for development and testing production/mission-critical workloads,” and went on to say “This is an affirmation that more businesses are comfortable with cloud deployments beyond the front office running salesforce automation (SFA) and email.”

This shows that companies are becoming more aware of, and switched on to, the benefits that cloud-based software can bring to their company.

It was also demonstrated that, on top of cost savings, accessibility and innovation, SaaS-based systems allow for easier training and lower learning curves for employees.

“Non-IT professionals often view the cloud strictly as a tool that they can use to reduce their operating costs,” and, in turn, effort.

Biarri empowering you through the cloud

Biarri was established in 2009 with the mission of providing accessible business optimisation to all clients, regardless of size or budget. We develop bespoke SaaS solutions for you, with you, so that your solutions meet your specific requirements.

We have developed a range of applications for our clients to suit their specific Advanced Planning and Scheduling, Workforce Management, Business Analytics and Supply Chain needs.

Get in touch and see how you can benefit from our solutions today!

 

What Google Does Right

I’ve appreciated Google’s mission and its modus operandi for a long time now.  I’ve avidly read Planet Google, many Wired articles, and a number of blogs and other pieces about the company.  But what I want to address here is how Google provides a great user experience, what enables it as a company to follow the path it does, and what smaller companies can learn from it.
Keeping it Simple
It’s easy to state but hard to do right, and often requires deep design to accomplish, but it’s one thing that Google does extremely well: it keeps its interfaces simple.  This ability is, for sure, enabled and exploited by the very nature of the company: it’s a web-based company through and through.  That means it can radically simplify so many things that mass consumer computing users find so hard: a big example being navigating and using a file-based storage system.  Instead, of course, everything is stored by Google (in its “cloud”, if you like) – and this simply obviates the need for a Save button, a Load button, and all that junk.  Nor do you need any IT infrastructure to use most of Google’s products (email, for example).  By saving your documents or writing automatically – quietly and regularly, the way it should be done – the user never needs to even think about the where or how of storing data.  Except, that is, if you need to categorise – but here again Google makes sure that its core capacity – that of Search – is always front and centre and powerful enough to find whatever you need.  A user experience should aim to empower the user, not baffle or frustrate them, and in this regard Google generally succeeds admirably.
The Power of Free
By providing many of its products free to the mass consumer market, Google owes its audience nothing.  This gives it free rein to change and improve (in short, to innovate).  By having lots of small but focused products, it can bring on or cull away products quickly (generally at the lightning-quick speed of the web world, and impressively fast for such a big company).  Here again Google understands right in its DNA both freemium and the web’s “Everything, free” tendencies.  Google is also very good at knowing what to keep hidden – its apps are great at hiding functionality that is less relevant to day-to-day usage from the user (they’re often there, but you have to dig a little to find them).
But Will It Scale?
Google as a company has shown an almost terrifying ability to grow, but to grow without collapsing under its own weight.  One way that they do this is by – in the main – using low cost easily available hardware (which has financial benefits as well as intangible benefits), even in huge data centers; a Commodity Computing approach (they even store their servers in shipping containers).  Development has an open feel to it, and is often open sourced or provides public platforms and APIs; Google Labs and techniques that expose Beta versions show Google developing software often in public view – compare to the secrecy that often surrounds Apple development.  Product support is often scaled by using open forums where members of the public help each other.  Internally there is an almost astounding lack of management hierarchies.  In fact one could conjecture that Google is probably not really a big company as such, but a network of highly connected small companies that share common DNA and some common base technologies (often through open sourcing or open standards).  The shelter of the larger entity (not to mention its profitability) gives it the ability to take risks – if one of the smaller companies/products fails, it can be easily absorbed.
Of course, Google’s flagship Web Search also scales (it has to, to have any chance of covering billions of web pages).  But interestingly, it seems to me that the success of Google’s PageRank algorithm, the core of its Web Search function, is largely because at heart the algorithm combines the human and the machine in a very effective way: the human aspect is the importance a page gains from linking (a result of human activity), combined with a series of quantifications (the rank).  But we are now also starting to see meta-data aware algorithms that are getting nearer to natural speech, for example the Wolfram Alpha service (a so-called “computational engine”).  You can be sure that if Google truly cracks the problem of natural language search (which may or may not be equivalent to a, perhaps very dumb, AI), it will change the world (again!).  Indeed Google’s founders have stated that Google’s aim is to develop an Artificial Intelligence by way of Search, and there have been some startling successes: Google’s language translation service is apparently very good, and has resulted from a statistical approach enabled by massive data sets.
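As a toy illustration of that combination, the published PageRank iteration fits in a dozen lines of Python (a plain power iteration over a hand-made link graph, nothing like Google’s production system): each link acts as a human “vote” that passes on a share of the linking page’s importance, and the iteration quantifies this into a rank.

def pagerank(links, d=0.85, iterations=50):
    # links: dict mapping each page to the list of pages it links to
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small base rank (the random-surfer jump)...
        new_rank = {p: (1.0 - d) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                # ...and each link passes on a share of its page's rank
                new_rank[target] += d * rank[page] / len(outgoing)
        rank = new_rank
    return rank

print(pagerank({'a': ['b', 'c'], 'b': ['c'], 'c': ['a']}))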
What Type of Company is Google, Anyway?
Google might be construed as an “information” company – after all, it wants to “provide access to all the world’s data”.  But there’s an important distinction to be made here – Google only cares about data insomuch as it is useful to someone (typically, consumers or businesses) – it does not care about information per se.  (That’s not to say Google will lose your data!).  The point is that Google is above all a technology company – it is enabling and automating the use of technology, predominantly software, but increasingly also hardware, to solve all sorts of engineering problems, and lots of data just happen to be the input.  Storing millions of search results, using millions of documents in different languages in order to automate translation, and many other examples support this view.  Google as a company is a master engineering problem solver, including solving some of its own internal problems.  Many of its products are happy accidents, or the results of its famous “20% time”, where its employees are given one day a week to pursue their own interests.  Google is like a giant R&D lab that also happens to be a corporation.  It also places huge importance on hiring the right people (smart ones), because it knows that great solutions come from clever minds – in fact CEO Larry Page personally signs off on every new hire.
What We Can Learn
The humble web start-up right through to the big unwieldy enterprise can learn much from Google’s approach, particularly if your products or services are targeting the mass consumer or massive business arenas:
  • Don’t discount the ability of technology to be a game changer.  Google has disrupted many industries.
  • User experience matters.  Strip away everything but the essentials to get the job done.  What’s left should work well.
  • Make sure your core product is healthy and pursue improvement and innovation as aggressively as you can.
  • Keep your technology and processes as open as possible.  Closed solutions harm innovation and the sharing that helps problem solving.
  • Scale through technology – automate as much as possible.
  • Give some of your product(s) away for free.

Further Reading: “Google Thinks Small”, Google’s “Ten Things We Know To Be True”, “How Google Works”

Cross-Platform Development, by David

I was involved in the development of a ‘simple’ application in C++ on Windows and wanted to get it working on multiple versions of Linux as well. By ‘simple’, I mean there is no Windows GUI and no links to complicated third-party libraries, so a lot of the C++ should just port straight over to Linux. Below are a few tips and lessons learned while I went about this task.

VirtualBox and Code Repository

I was working on a Windows box and wanted to port to Ubuntu (a Debian flavour of Linux) and Oracle Enterprise Linux (a Red Hat flavour). Virtualisation is definitely your friend here, so on my Windows box I ran virtual machines with Ubuntu and Oracle Enterprise Linux.

As we have a number of developers working on a number of projects, with libraries shared across several applications, it is logical that we have a code repository. Since I spend most of my time in a Windows environment and am more familiar with it, I prefer to do most of my editing in Windows (the application was originally written on Windows, and most of our applications don’t need to work on other platforms).

I could have chosen to set up the code repository (in our case Mercurial) on each of the Linux VMs. However, this would have been more time consuming, and I didn’t want to have to push code to the main shared repository and then pull onto the Linux VMs every time I wanted to test changes on other platforms, especially when the other platforms are all on the same computer. Instead, I set up shared folders in the VMs and pointed them at the copy of the repo I had in Windows. Now I could easily make changes in Windows and build and test them in all the environments.

Building and Testing

A couple of things to note: while Windows generally builds and runs quite happily without impacting what goes on in the Linux world, the two Linux worlds impact each other, because they both use the same makefile and clean/build binaries with the same names. I found that I had to be careful when building in the different Linux environments that the clean was done completely before the build started. The clean script, when run in Ubuntu, did not always remove the binaries created in the Oracle Enterprise Linux environment. If the binaries didn’t get cleaned properly I got build errors (e.g. /usr/bin/ld: cannot open output file executable_filename: Operation not permitted) or a segmentation fault at run time.

Different versions of g++ in Ubuntu (4.5.2) and Oracle Enterprise Linux (4.1.2) also meant there were different compilation issues to deal with, but overall these were not too difficult to work through. Some of them revolved around simple compilation problems (e.g. g++ 4.1.2 was stricter about linking to libraries whose names started with ‘lib’; 4.5.2 didn’t seem to mind as much).

Another set of issues related to the third-party libraries we used (e.g. curl and COIN-OR’s Osi). The COIN library problems were overcome by simply ensuring the source code was downloaded, built and installed on the required Linux platforms (i.e. no changes were required to the source code itself). Curl behaved a bit differently (it didn’t work on Oracle Enterprise Linux), but that was because of a difference in the way the g++ versions treated string concatenation. Once I made a small change to the code it worked fine.

The end result was that I had the same source code building and running in three different environments: Windows, Ubuntu and Oracle Enterprise Linux.

js/css resource serving in python apps with Fanstatic

I’ve just been checking out Fanstatic, a resource publishing/static file serving solution for WSGI Python apps. I’ve been contemplating something like this as our JavaScript and CSS dependencies are getting more complex. It would also be useful to have some form of automatic cache invalidation, so users don’t have to do a special browser refresh when we update our applications.

It’s easy to set up with CherryPy:

import cherrypy
from fanstatic import Fanstatic

if __name__ == "__main__":
    # Root, setup_auth and setup_session_storage are defined elsewhere in our app
    app = cherrypy.Application(Root())
    app.wsgiapp.pipeline.append(('repoze.who', setup_auth))
    app.wsgiapp.pipeline.append(('beaker', setup_session_storage))
    # Fanstatic wraps the WSGI pipeline and injects needed resources into pages
    app.wsgiapp.pipeline.append(('fanstatic', Fanstatic))
    cherrypy.quickstart(app, config='workbench.conf')

After that is done, you can call jquery.need() in the widget/template that needs jQuery, and similarly for our other dependencies. Has anyone else used Fanstatic? What are other solutions for dependency management and serving js/css? Is there an easier and better solution? Wrapping new libraries for Fanstatic looks like a bit of effort, but I haven’t explored it much yet.
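For anyone else trying it out, defining your own resources looks roughly like this. It is a sketch based on Fanstatic’s documented Library/Resource API; the library name, file names and dependencies are all illustrative, and a real library also needs registering via the ‘fanstatic.libraries’ entry point.

from fanstatic import Library, Resource

# A Library points at a directory of static files shipped with your package
library = Library('my_lib', 'resources')

jquery = Resource(library, 'jquery.js', minified='jquery.min.js')
app_css = Resource(library, 'app.css')
app_js = Resource(library, 'app.js', depends=[jquery])

def render_widget():
    # Needing a resource during a request makes the Fanstatic middleware
    # inject the <script>/<link> tags, plus any dependencies, into the page
    app_js.need()   # pulls in jquery.js automatically
    app_css.need()
    return '<div id="widget"></div>'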

Loki

Editing vector layers with Quantum GIS

I’ve been looking around for an open source alternative to the excellent Desktop GIS Mapping Application MapInfo Pro. I’ve installed and played with both Quantum GIS (QGIS) and MapWindow. The latter is a little bare bones and does not seem to include any geometry editing, so I’ve been focussing instead on QGIS.

QGIS is a great application, though it is noticeably slower than MapInfo Pro at rendering very large mapping layers. However (and this is, strangely, not well known or documented), QGIS will only let you edit layers which are Shapefiles. There is no documentation that I can find which says why the “Toggle Editing” function is always disabled for other file types; this is very confusing and frustrating if you don’t know about the Shapefile limitation.

There is a converter within QGIS (which is really just a UI on ogr2ogr) to convert between TAB and SHP formats, but Shapefile layers are rather limited because you can’t mix geometries within a layer, e.g. both nodes (points) and lines. This is a real problem because the map data I have contains both points and lines: the points are needed to style the line endpoints and to carry data attributes for them. Separating points and lines into different layers in this context is a bit of a pain. Shapefiles also have various other limitations; for example, field names can be no longer than 10 characters.
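Since the converter is just a UI over ogr2ogr, the split can also be scripted with GDAL/OGR’s Python bindings. A rough sketch, where the file names are illustrative (and remember the Shapefile driver will truncate field names longer than 10 characters):

from osgeo import ogr

src = ogr.Open('mixed.tab')   # a TAB file containing both points and lines
layer = src.GetLayer(0)
drv = ogr.GetDriverByName('ESRI Shapefile')

def extract(geom_type, path):
    # Copy features of one geometry type into a new single-geometry Shapefile
    out = drv.CreateDataSource(path)
    out_layer = out.CreateLayer('layer', srs=layer.GetSpatialRef(),
                                geom_type=geom_type)
    defn = layer.GetLayerDefn()
    for i in range(defn.GetFieldCount()):   # replicate the attribute schema
        out_layer.CreateField(defn.GetFieldDefn(i))
    layer.ResetReading()
    for feat in layer:
        geom = feat.GetGeometryRef()
        if geom is not None and ogr.GT_Flatten(geom.GetGeometryType()) == geom_type:
            new_feat = ogr.Feature(out_layer.GetLayerDefn())
            new_feat.SetFrom(feat)          # copies attributes and geometry
            out_layer.CreateFeature(new_feat)
    out.Destroy()                           # flush the Shapefile to disk

extract(ogr.wkbPoint, 'points.shp')
extract(ogr.wkbLineString, 'lines.shp')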

Perhaps hooking up QGIS to an SQLite database with Spatialite extension might make managing the map data layers more streamlined – one for a rainy day…
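As a first taste of that rainy-day idea, querying such a database from Python is straightforward. This is a speculative sketch assuming the mod_spatialite extension is installed and a hypothetical map_data.sqlite file with a features table:

import sqlite3

conn = sqlite3.connect('map_data.sqlite')
conn.enable_load_extension(True)       # allow loading the SpatiaLite extension
conn.load_extension('mod_spatialite')

# Mixed geometries can live in one table; filter by type when querying
for name, wkt in conn.execute(
        "SELECT name, AsText(geom) FROM features "
        "WHERE GeometryType(geom) = 'LINESTRING'"):
    print(name, wkt)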

Installing Cplex in Linux Ubuntu

I’ve been installing ILOG’s Cplex product on a 64-bit Ubuntu machine. There were a few small hiccups along the way.

First, try to install as root with

./cplex_studio122.linux-x86.bin

On my machine I’m installing to /opt/ILOG/CPLEX_Studio122. If you get an error like “jre/bin/java: not found” then you need the “32 bit libs” package:

apt-get install ia32-libs

(You may also need to set the path with LD_LIBRARY_PATH=/usr/lib32). The 32 bit libraries seem to be required only for the installer (at least, they are not needed for programs that just link with the static Cplex libs).

After installing, you may get a build error (running as a non-root user) saying that the cplex header files can’t be found. Try ls /opt/ILOG/CPLEX_Studio122 and see if there are permission denied messages; the installation seems to mess up some permissions on this folder, but this is easily fixed with chmod +r /opt/ILOG/CPLEX_Studio122.

If you are using COIN-OR’s Osi class OsiCpxSolverInterface you will also need the following at the top of your OsiCpxSolverInterface.cpp file:

#include "/opt/ILOG/CPLEX_Studio122/cplex/include/ilcplex/cplex.h"

A typical Makefile snippet which includes Cplex and COIN/Osi might then look like:

my_objects = YourFile.o OsiCbcSolverInterface.o OsiCpxSolverInterface.o

CPPFLAGS = -fPIC -I/usr/include -I/usr/include/coin -DNDEBUG -I/opt/ILOG/CPLEX_Studio122/cplex/include/ilcplex

LIBFLAGS = -l:libCbc.so -l:libCbcSolver.so -l:libCoinUtils.so -l:libOsi.so -l:libOsiClp.so -l:libClp.so -l:/opt/ILOG/CPLEX_Studio122/cplex/lib/x86-64_sles10_4.1/static_pic/libcplex.a -l:/opt/ILOG/CPLEX_Studio122/cplex/lib/x86-64_sles10_4.1/static_pic/libilocplex.a

my_program : $(my_objects)
g++ -Wall -fPIC -o my_program $(my_objects) $(LIBFLAGS)

.PHONY : clean

clean :
rm *.o my_program

MapInfo tips

I’ve had reason to manipulate some spatial data recently using MapInfo Professional, and picked up a few tips that I thought might be useful to someone else one day:

  • You can use SQL conditions that select objects by their spatial properties, with syntax like Str$(obj) = "line", Str$(obj) = "point", Str$(obj) = "region", etc.
  • To change the projection, use File / Save Copy As and click the Projection button.
  • Get used to using the arrow keys to pan around the map, and the mousewheel to zoom in and out – this saves having to switch to the Grabber tool all the time to move around.
  • Press “S” to go into “Snap” mode. This is very useful when creating lines, to make sure they attach to nodes (note the nodes don’t have to be in the same layer when your lines snap). When in Snap mode you will see the text “SNAP” in the status bar at the bottom of the MapInfo main window.
  • The menu item Map / Change View is useful to go straight to a given lat/long coordinate. (It centres the view on your entered coordinate). You will need to be careful however if you have multiple layers open that have different coordinates or projections.
  • To merge two different layers:

* First, the tables have to have the same structure. Use Table / Maintenance / Table Structure to add and remove fields as required.
* Then use either Table / Update Column or Table / Append Rows To Table. If using Update Column on node/arc layers (to copy data from the node or arc objects of one table to the corresponding objects on another), I found using Join with Intersects worked best.

  • To separate a single layer that contains nodes and arcs into two layers:

* Run an SQL Select query with the condition Str$(obj) = "line"; this will select only the arcs.
* Then Save the selection to a separate file (which will be the arcs layer), and delete the selected arcs from your original layer (leaving only the nodes).

Cross-platform development

During the course of developing Biarri’s flagship Workbench product, we’ve taken pains to ensure that our (GUI-less) optimisation “engines” work well under both Windows and Linux operating systems (so-called cross-platform). This turns out to be relatively easy as long as you stay away from the big OS-specific frameworks (e.g. Microsoft’s MFC/COM/ATL etc). We’ve picked up some handy tips along the way, particularly applicable to C++ development, which are worth sharing here.

  • Be aware of differences in line endings – Windows uses carriage return and line feed \r\n, while Linux/Unix uses just line feed \n. (Note that Visual Studio will show files with Linux line feeds correctly, but Notepad won’t – this is one way to tell what line endings your file has in Windows). This can be particularly important when importing data e.g. into databases where the file originates from another OS.
  • Always use forward slashes for file paths, not backslashes. Also, file names and folder paths are case sensitive under Linux but not under Windows. And don’t assume there is a C: or D: drive!
  • You may have to be careful writing to temporary files and folders. In Linux /tmp is often used; in Windows it is /[user]/AppData/local/temp (the location of the TEMP environment variable; type %TEMP% into the start menu or Windows Explorer). For Linux, it is sometimes necessary to manipulate a folder’s “sticky bit” and permissions to ensure that the folder is accessible by other users (e.g. a Postgres database user), e.g. in Python:
import os, stat
# Set the sticky bit and give group read/write and other users read/traverse
# access, so that e.g. a Postgres user can work inside the folder
os.chmod(temp_dir_name, os.stat(temp_dir_name).st_mode | stat.S_ISVTX | stat.S_IRGRP | stat.S_IROTH | stat.S_IWGRP | stat.S_IXOTH)
  • Be aware of the differences in file permissions between Windows and Linux. In Linux, files have an “executable” bit; chmod a+x [file] makes a file executable, which can then be run with ./filename.

For C++ development:

  • Name all cpp and h files in lower case if possible. File names are case sensitive in Linux, and this includes #include’s!
  • For compiling with GCC under Linux, make sure each C++ source file ends with a newline (GCC warns about files with no newline at the end).
  • In Linux C++ programs, general exception handling with catch(…) will not catch hardware faults such as segmentation faults (unlike structured exception handling on Windows). You can use signal handlers instead (see this for example), though it’s not as good: it is more equivalent to an exit(), with a chance to clean up.
  • Beware of comparing doubles for equality or inequality, at least in C++ programs. A == B may hold on one platform but not the other even when A and B are essentially the same number, so always compare with a delta, e.g. fabs(A - B) < epsilon.
  • Build tips for Linux: type "make" in the project directory to build; this searches for a file called "Makefile" and runs it (use "make -f filename" to build from a different makefile). To force a recompile you can "touch" a file using "touch filename".
    To clean out all object files, type "make clean" (as long as your makefile defines what cleaning does). Use "make -j4" to run make with four concurrent jobs, to take advantage of multicore.
  • In bash, to get a recursive line count of .cpp/.h files: find [directory] -type f \( -name '*.cpp' -o -name '*.h' \) -exec wc -l {} \; | awk '{total += $1} END {print total}'