The silver lining on SA’s IT skills brain drain

SA’s senior IT skills are increasingly leaving the country, but this creates a series of new opportunities for local companies and IT professionals.

By Mari-Lize van Reenen, Director at Information Dynamics, the KID Group’s business information career management specialists.

Recent reports indicate that the South African ‘brain drain’ has gained momentum as skilled professionals seek better opportunities abroad, often driven by concern about South Africa’s economy, crime and other factors. Some reports put South Africa’s brain drain among the highest in the world, with Xpatweb reporting a sharp increase in the number of in-demand skills leaving the country. In multinational technology and consulting organisations, top South African candidates are increasingly being poached, transferred or promoted abroad. Senior people are also leaving the country to advance their careers after being affected by retrenchments or forced into early retirement. Not only does this exodus create immediate skills gaps – it also hampers mentorship and development of less-experienced staff.

The Brain Drain

 

In many cases, these key senior ICT resources depart with little notice or time for effective succession planning. In our own organisation, we have seen it take as little as six weeks from a candidate receiving an offer abroad to the time they leave the country.

For the ICT sector, already grappling with a lack of advanced next-generation technology skills, particularly in fields such as robotics, AI and data science, this brain drain is leaving a significant skills gap and causing serious concern among companies that urgently need these advanced skills to innovate, grow their organisations and remain globally competitive.

There is a massive skills shortage for true data scientists and big data consultants with experience in Hive, Spark, Kafka, NiFi and Ranger, to name a few. Many local organisations are seeking to fill gaps by recruiting from countries such as Zimbabwe, Nigeria, Iran and Poland, which offer excellent training in data science and data engineering. These candidates are typically highly skilled, passionate, driven and demonstrate a hunger for a career in ICT – making them a compelling proposition for local employers. However, importing key skills is often hampered by work visa hurdles. It also does little for South Africa’s ambitions to become a knowledge society, empower a new generation of ICT professionals and support the local economy.

While the quest for key ICT skills is a challenge, the situation presents a number of opportunities for South Africa to find innovative new ways to improve skills development, succession planning and employment conditions – all of which will benefit employers and employees in the long term.

The new gaps being created in the market are generating opportunities for people coming up through the ranks to grow into those positions. Organisations should take the opportunity to identify key resources early on and put proper succession plans in place for them – not only to mitigate the risk of sudden skills gaps, but also to ensure sustainable growth in the workplace.

Organisations also need to significantly step up their skills development programmes, bearing in mind that many graduate programmes see candidates job-hopping for nominal increases, or simply for better working conditions, such as more flexible working hours. Employers should move to mitigate the risk of losing key resources by making an effort to understand what motivates their employees, and then moving to offer better working conditions, improved work-life balance, mentorship and more opportunities for personal and professional development. By doing so, organisations will not only improve their chances of retaining scarce skills, but will also improve working conditions for the entire company, with improved staff morale and a better bottom line as a result.

The data asset: What’s it really worth?

All data has intrinsic worth, but its real value lies in how an organisation uses it, and whether it is fit for purpose when needed.

By Mervyn Mooi, Director of Knowledge Integration Dynamics (KID), the ICT services arm of the Thesele Group.

Data is being widely described as the ‘new gold’ or the ‘new oil’, and the most valuable asset enterprises have. In many respects, this is true. But while the forward-looking enterprise depends heavily on its data, putting a rand value to this data remains a challenge.

By its very nature, data has at its core informative, instructional and locational properties that make it the “glue” of any ontology or of anything that exists.

It is fundamental for all business, institutional, governmental, household and public processes, systems or exchanges to operate, whether manually or automatically. Data enables communication, learning, design, knowledge, information transfer, actions or execution in all spheres of life.

As an enterprise asset, the type of data most frequently valued is franchised (or “monetised”) data – data that has been processed to produce outcomes such as metrics and insights that are important and valuable to the company and downstream consumers for regulating and operating the business.

Pricing models exist to put a rand value to this data based on factors such as acquisition and processing time, data volume, the number of inputs it had and the importance of the decisions it can inform.


However, most data valuation models are subjective and based on criteria that are not always standardised across industry sectors and disciplines; in many cases, data is not even recognised as an asset for accounting, strategy and other purposes.

Where data is recognised as an asset with monetary value, valuations could be made based on the cost of managing and provisioning data, the volumes available for data discovery or analytical consumption purposes, trigger actioning dependencies, the data’s importance for decision-making, enabling or learning and knowledge transfer capability, and even the distance the data is transmitted.
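
To make this concrete, the criteria above could be combined into a simple weighted scoring model. The sketch below is illustrative only, assuming hypothetical factor names and weights; it is not a standard or regulated pricing method.

```python
# Illustrative only: a simple weighted scoring model for a data asset,
# combining valuation criteria like those described above. The factor
# names and weights are assumptions, not a standard pricing method.

FACTOR_WEIGHTS = {
    "acquisition_and_processing_cost": 0.25,  # effort/cost to acquire and prepare
    "volume_available": 0.15,                 # usable volume for discovery/analytics
    "number_of_inputs": 0.10,                 # breadth of contributing sources
    "decision_importance": 0.35,              # criticality of decisions it informs
    "enabling_and_learning_value": 0.15,      # knowledge-transfer and reuse value
}

def data_asset_score(ratings: dict) -> float:
    """Combine normalised 0-1 ratings per factor into a composite 0-1 score."""
    return sum(weight * ratings.get(name, 0.0)
               for name, weight in FACTOR_WEIGHTS.items())

# Hypothetical ratings for one data asset; multiply the score by a rand
# benchmark (e.g. replacement cost) to arrive at an indicative price.
score = data_asset_score({
    "acquisition_and_processing_cost": 0.6,
    "volume_available": 0.8,
    "number_of_inputs": 0.5,
    "decision_importance": 0.9,
    "enabling_and_learning_value": 0.4,
})
print(f"Composite value score: {score:.2f}")
```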

Even data ‘placeholders’ or capacity can be priced and sold to enable communication before usage, as evidenced by the telcos and data vendors of the world.

Although most existing data pricing models are regulated by communications and government authorities, they are by no means perfect, and the perceived value and pricing criteria of these data ‘placeholders’ may differ from one community to another, with the entire model subject to the risk of monopolisation.

The value assigned to the data may vary depending on the sector the organisation operates in. In the financial sector, for example, the most valuable data is likely that relating to the bottom line or profit; for others, the key focus might be data relating to sales (revenue) or expenses. Data such as this may well be valued as an asset during the sale of a business, and due to its importance to the enterprise, it might even be insured against loss.

But what about the data relating to enterprise intellectual property (IP) – its algorithms, models and methods? Or its data currently in transit without context, or its historic data, which may not be in use now but could become vital for trend modelling? It is harder to put a rand value to IP and data which has not been used in years but has the potential to improve the business at some point in future.

Maintaining and increasing the value of data assets

All data has value, but without context and effective data management, it cannot contribute its full value to the outcomes of whoever is using that data.

It could be argued that analytics models can use weightings to overcome inconsistencies and gaps in data; however, the ideal is to not have the inconsistencies or gaps at all. To achieve this, organisations need to retain and properly manage quality data that has been verified, validated, cleansed, integrated and reconciled.
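
As a minimal sketch of what ‘verified, validated, cleansed, integrated and reconciled’ can look like in practice, the example below runs a few basic quality checks over a pandas DataFrame. The column names, toy data and control total are hypothetical; real checks would be driven by the organisation’s own governance rules.

```python
import pandas as pd

# Minimal sketch of basic verification, validation and reconciliation checks
# over a dataset. Column names, the toy data and the control total are
# hypothetical; real checks would be driven by governance rules.

def quality_report(df: pd.DataFrame, key: str, amount_col: str,
                   control_total: float) -> dict:
    return {
        # Verification: are mandatory fields populated?
        "missing_ratio": df[[key, amount_col]].isna().mean().to_dict(),
        # Validation: key values should be unique
        "duplicate_keys": int(df[key].duplicated().sum()),
        # Cleansing candidates: obviously invalid amounts
        "negative_amounts": int((df[amount_col] < 0).sum()),
        # Reconciliation: does the detail roll up to the control total?
        "reconciles": bool(abs(df[amount_col].sum() - control_total) < 0.01),
    }

accounts = pd.DataFrame({"account_id": ["A1", "A2", "A2"],
                         "balance": [100.0, 250.0, -5.0]})
print(quality_report(accounts, key="account_id", amount_col="balance",
                     control_total=345.0))
```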

Effective governance should also be in place to ensure and direct data management practices, with tried and tested rules, controls and architectures to assure commonality in practices and processes, sustain data quality and avoid the dreaded data ‘spaghetti junction’.

Even when quality data is available and well-managed, however, the value of this data remains subjective and theoretical – particularly when its future importance is not known yet.

Historic data, for many simply volumes in costly storage, is very important for those in the financial sector, for example, serving as irrefutable evidence when analysing the full lifecycle and long-term behaviour of customers, and building predictions informed by this.

Outside of the business world, data indicators going back millions of years are vital for fields such as astronomy, archaeology or geology.

Therefore, no matter what the current accepted data valuation models are, the answer to the question: ‘what is data worth?’ is that quality, well managed data is potentially priceless.

InfoFlow relaunches Informatica user group in SA

 

InfoFlow has reignited the Informatica user group for South African customers and partners.

Launched in Johannesburg recently, the user group brings together customers and partners to discuss use cases, case studies, challenges and opportunities using Informatica solutions.

Informatica is the world’s leader in enterprise cloud data management, with local customers harnessing its solutions for compliance, governance, analytics, big data management and master data management.

“Informatica usage has picked up in South Africa in recent years, as data is increasingly seen as the new oil,” says Veemal Kalanjee, MD of InfoFlow. “Many local organisations are refocusing their efforts to leverage data as an asset, and Informatica helps them execute on a data strategy.”

Kalanjee says the original Informatica user group was launched some 14 years ago but tapered off some years back. However, as data management becomes a key focus for local enterprises, InfoFlow took the lead in reigniting the group, to support knowledge sharing and collaboration in the local market.

Around 55 customers and partners, together with international Informatica representatives Kash Rafique, MD of Informatica, and Gregory Anderson, Regional Director for Informatica South Africa and Africa, gathered in Johannesburg last week to formally launch the group. A committee of five customers was elected to drive and manage the user group. Delegates at the launch also heard from key local Informatica customers Sanlam and the National Institute for Communicable Diseases (NICD) on their experiences in implementing Informatica solutions.

“While InfoFlow was the first local reseller, there are now a number of fellow resellers in the local market. We believe that driving the relaunch of the user group is important for us, fellow resellers and our customers alike,” says Kalanjee. “We find in talks with customers that there are many opportunities for them to expand their Informatica use across the enterprise, and to optimise their investments, so this user group is expected to stimulate collaboration and the sharing of knowledge among users.”

AI for enterprise: Get the basics right before making the move

Machine Learning and Artificial Intelligence (AI) are certainly the way of the future, offering enterprises faster, more accurate and more efficient ways of automating processes than ever before, writes Chris Pallikarides, General Manager, ITBusiness. Gartner reports that AI adoption in organisations around the world has tripled over the past year, with 37% of organisations having deployed AI – or about to do so. By 2021, Gartner expects AI augmentation to create $2.9 trillion in business value and 6.2 billion hours of worker productivity globally.


In South Africa, Machine Learning and AI have been talking points for many years. However, the practical implementation and application of these technologies has not quite caught up with the rest of the world. Mid-size and large local enterprises are looking to Machine Learning and AI to streamline operations, support strategic and personnel planning, and gain insights such as how certain products are performing.

While many companies are talking the talk, they often seem to forget the fundamentals – crucially, knowing what data they have and where it is sitting. Without the Data Engineering piece of the puzzle in place, Machine Learning and AI cannot deliver on expectations.

Data Quality is a big issue in many South African organisations, and most of them are aware of the problem. Amid exponential growth in data volumes, companies have lost control over data sources and standards; they lack effective data governance and stringent controls. On top of this, while most want to optimise their data use, possibly even monetising it, many have not formulated a clear strategy for doing so.
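
A minimal sketch of the ‘know your data’ fundamentals follows, assuming the data has already been landed in a pandas DataFrame; the file and column names are hypothetical. Profiling completeness, duplication and staleness like this is a small first step before any Machine Learning work begins.

```python
import pandas as pd

# Minimal profiling pass, assuming the source data has been landed in a
# DataFrame. The file name and column names are hypothetical; the point is
# simply to know what data exists and how trustworthy it is before any
# Machine Learning work begins.

def profile(df: pd.DataFrame, id_col: str, updated_col: str) -> pd.Series:
    age_days = (pd.Timestamp.now() - pd.to_datetime(df[updated_col])).dt.days
    return pd.Series({
        "rows": len(df),
        "duplicate_ids": int(df[id_col].duplicated().sum()),
        "overall_null_ratio": float(df.isna().mean().mean()),
        "rows_older_than_a_year": int((age_days > 365).sum()),
    })

customers = pd.read_csv("customers.csv")  # hypothetical extract
print(profile(customers, id_col="customer_id", updated_col="last_updated"))
```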

Data quality and machine learning

Therefore, while Machine Learning and AI should definitely be on the roadmap for every South African organisation, attention has to be given to the fundamentals first – including strategic planning, data quality and data governance.

Addressing underlying issues could take as little as a few weeks, or up to several years, depending on budget, the size of the business, the amount of data involved, the technologies in use and the skills available. A data governance exercise alone could take 36 months to implement. But these are necessary processes. Embarking on an AI project before addressing underlying data quality issues could result in flawed outputs, unexpected additional costs or delays in project delivery.

In addition to assuring data quality, organisations need a clear vision on how they intend to use their data in future. If, for example, they hope to monetise their data, then in early planning, they might work backwards, considering the type of data they have, its potential value, and models for monetising it, while also taking into account the regulations around data privacy.

Once organisations have clear roadmaps and all the necessary data, and they know they can trust it, making strategic decisions becomes a much easier task; and once the right people, processes and technologies are in place, the whole discussion becomes streamlined.

ITBusiness has long been a Data Warehouse and BI specialist in the South African market, and with that extensive knowledge and skill we assist customers to collect their data, store it intelligently and maximise the insights gained from it. As the environment evolves, our approach has evolved accordingly, and we now recommend being consulted from the outset, as soon as a customer has a data requirement.

By carrying out a full maturity assessment and gap analysis covering people, processes and technology, and by getting the basics right first, we find that data projects – and those intended to support later AI projects – are more successful and more likely to deliver the expected outcomes.

Getting to grips with data for CECL and risk forecasting best practice

Organisations have the data they need, but they must move to proper data governance and management to align with global risk forecasting models

By Mervyn Mooi, Director at Knowledge Integration Dynamics (KID).

The Current Expected Credit Loss (CECL) model, the new Financial Accounting Standards Board (FASB) standard for estimating credit losses on financial instruments, is to be implemented from next year for publicly traded companies and from 2023 for private companies. This new model governs the recognition and measurement of credit losses for loans and debt securities. Because CECL (and compliance frameworks such as BCBS 239) requires organisations to measure credit exposures and expected credit losses across the life of a loan, there are concerns that it could require more data and more careful data modelling.

As organisations around the world prepare to implement CECL, many say they may not have enough data – or the right quality data – to comply effectively. Three years ago, Moody’s research in the US found that banks foresaw challenges in terms of the data they had available to comply, and an Abrigo Lender Survey earlier this year found that almost one in six respondents is unsure whether they have the quantity and quality of data necessary to estimate losses under the CECL standard.

But they are likely wrong.

Most major financial institutions have all the historical and current data they need to forecast losses accurately and with confidence within the CECL model. The CECL model in effect underpins best practice in reporting, loss forecasting and production of balance sheets and income statements, and therefore banks – and all businesses – should already have the foundations in place to align with the CECL model.
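
By way of illustration only – this is a common textbook approximation, not the FASB-prescribed calculation – lifetime expected credit loss can be sketched as the sum, over the remaining life of a loan, of the probability of default multiplied by loss given default and exposure at default, discounted to present value. All figures below are hypothetical.

```python
# Illustrative sketch of a lifetime expected credit loss calculation:
# ECL = sum over remaining periods of PD_t * LGD * EAD_t, discounted to
# present value. A common approximation, not the FASB-prescribed method;
# all inputs below are hypothetical.

def lifetime_ecl(pd_by_period, lgd, ead_by_period, discount_rate):
    """Sum discounted expected losses over the remaining life of a loan."""
    ecl = 0.0
    for t, (pd_t, ead_t) in enumerate(zip(pd_by_period, ead_by_period), start=1):
        ecl += (pd_t * lgd * ead_t) / ((1 + discount_rate) ** t)
    return ecl

# Example: a three-year loan with declining exposure.
print(lifetime_ecl(
    pd_by_period=[0.02, 0.025, 0.03],            # marginal default probability per year
    lgd=0.45,                                    # loss given default
    ead_by_period=[1_000_000, 700_000, 400_000], # exposure at default per year
    discount_rate=0.08,
))
```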

The challenge they may face, however, is unravelling their own data ‘spaghetti junctions’ to consolidate and report on the appropriate data.

To estimate potential losses on credit extended to their customers, credit providers (such as the banks) are fully reliant on their processes for collating and storing relevant data that has been qualified (verified, validated, cleansed, integrated and reconciled) – that is, quality, trusted data. The reality is that such processes are often disjointed, overlapping or duplicated with differing logic, which results in “many truths”. Some data is traditionally hidden across departments for internal competitive reasons, which could also complicate forecasting.

Many organisations also rely on manual interventions in the collation and preparation of the data, which introduces more risk with regard to data quality. Furthermore, periodic reports may exclude transactions in transit (in the process of being committed by the payees/lenders) or those in suspense, which will skew loss totals (i.e. overstate the losses). In many instances, loss deviations are used in reports to compensate and reconcile the numbers.

In line with CECL and BCBS 239 compliance, organisations have to show how losses were calculated and which governance processes were followed to approve these figures – ultimately revealing the data lineage and proving governance.

In most organisations, the necessary data exists.

Many also have the necessary risk models and forecasting expertise in place. The problem in preparing for CECL lies in disparities in the understanding of their data, and in the overall management of that data. In many cases, traditional data management evolved without proper controls in place, and over time it was not formalised or organised, which could skew the trustworthiness of data. Where data management practices have not been able to keep up with change, organisations have tended to skip the set, traditional frameworks and standards – and this is the cause of data chaos.

To unravel the spaghetti junction and prepare the quality data needed for accurate loss and risk forecasting, organisations benefit from centralised stores of quality data and full data tracking (or lineage) and reporting capability. Enterprise-wide sharing of data supports best practice governance, compliance and risk management, but also allows organisations to better understand their market and identify growth opportunities through cross-selling and up-selling.
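
A minimal sketch of such a centralised tracking capability follows, assuming a simple relational store; the table, column and dataset names are hypothetical and not tied to any particular toolset.

```python
import sqlite3
from datetime import datetime, timezone

# Minimal sketch of a centralised lineage store: each record links a target
# data set back to its source, the transformation that produced it, the
# governance sign-off and when it was loaded. Names are hypothetical.

conn = sqlite3.connect("lineage.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS data_lineage (
        target_dataset  TEXT NOT NULL,
        source_dataset  TEXT NOT NULL,
        transformation  TEXT NOT NULL,
        approved_by     TEXT,
        loaded_at       TEXT NOT NULL
    )
""")
conn.execute(
    "INSERT INTO data_lineage VALUES (?, ?, ?, ?, ?)",
    ("expected_loss_report", "loan_book_monthly", "ecl_model_v3",
     "credit_risk_committee", datetime.now(timezone.utc).isoformat()),
)
conn.commit()

# An auditor's question: where did this report's figures come from?
for row in conn.execute(
        "SELECT source_dataset, transformation, approved_by, loaded_at "
        "FROM data_lineage WHERE target_dataset = ?", ("expected_loss_report",)):
    print(row)
```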

Enterprises also have to organise their data in a formalised fashion – and this is the objective of data governance. The advent of data governance in recent years provides all the guidelines necessary for data best practice. Uniform data management – covering the rules, policies and standards applied to data throughout its lifecycle – is the execution of this, setting in place all the foundations an organisation needs to align with CECL or any best practice reporting or forecasting model to emerge in future.

 

 

ITBusiness appoints new general manager

ITBusiness has appointed Chris Pallikarides as its general manager. Pallikarides has been in the IT sector for 14 years.

In a statement, ITBusiness says he will be tasked with growing the firm into the next-generation solutions space.

 


The company is part of the Knowledge Integration Dynamics (KID) Group, which provides data management and analytics solutions.

Pallikarides says he is honoured to be appointed in his new role. “It’s a privilege to be a part of an established firm like ITBusiness and the KID Group as a whole, which is celebrating its 20th anniversary this year.”

He adds that ITBusiness will move to differentiate and add more value by becoming the next-generation solutions provider of choice.

“I will be developing and rolling out the business strategy for ITBusiness to ensure a continuation of the successes the business has already experienced over its 15-year history, as well as take the business to new heights through strategic partnerships with key vendors and clients.

“While ITBusiness has built a successful business around key business intelligence solutions, data warehousing, data integration, and data management vendors and services, we see growing interest in the market for next-generation services and solutions using technologies such as artificial intelligence and machine learning.”

While at tdglobal and Med-e-Mass, he was involved mainly in the sales element of the business and gained experience in various industries, most notably healthcare, banking, finance and retail.

During his time at Axiz, he was responsible for business development of the IBM Software Business Unit.

Pallikarides holds an LLB degree from Unisa.

Metadata management is a science

Meta analytics is the new model for enabling complete data and process oversight.

By Mervyn Mooi, Director of Knowledge Integration Dynamics (KID), the ICT services arm of the Thesele Group.

Data governance is crucial, and is embedded in most well-run enterprises. But most organisations are not yet maximising the full potential of their management and governance measures, in that they are not yet linking data management to governance to compliance.


Data management differs from governance. Data management refers to planning, building and running data capabilities. Governance relates to monitoring, evaluating and directing those enablers, assuring efficiencies through governance ‘placeholders’ or gates, which are entrenched in system or project management lifecycles.

Governance monitors, assures and directs data management practices not only in the execution of processes and business activities, but also helps achieve efficiencies – for example, in project management and system development lifecycles.

Moving to the next level

Most governance happens at a purely technical and operational level, but to elevate governance to support high-level compliance, organisations need to link rules, regulations, policies and guidelines to the actual processes and people at operational level. Compliance is set to become ever-more challenging as organisations deal with growing volumes of data across an expanded landscape of processes.



I advocate that governance not only be addressed at technical/operational (data management) levels, but also be linked to compliance, which carries risk and drives the organisation’s strategy. Major South African enterprises are starting to realise that linking governance to compliance could support the audit process and deliver multiple business benefits at the same time.

Recently, I highlighted how data stewards were stepping up their focus on mapping governance, risk management and compliance rules to actual processes, looking to the management of metadata to provide audit trails and evidence of compliance.

Traditionally, these audit trails have been hard to come by. Auditors – many of them with a limited technical background – had to assess reams of documents and request interviews with IT to track the linkages from legislation and guidelines to actual processes. In most cases, the processes linked to are purely technical in nature.

From a regulatory compliance point of view, traditional models do not provide direct links from a particular clause in legislation or best practice guidelines to the location and management of the relevant data – where it resides, who uses it and how – in light of the requirements of that clause. Auditors, however, need enterprises to prove lineage and articulate governance in the context of compliance.

Establishing the linkages

While enterprises typically say they are aware they could potentially link data management to governance to compliance, most do not undertake such exercises, possibly because they don’t have a mandate to do so, because they believe the tools to enable this are complex and costly, or simply because they believe the process will be too time-consuming.

Using sound methodology, mapping a process to legislation or guidelines is a once-off exercise that can take as little as two to three hours per process. In the typical organisation, with around 1 000 processes, it could take less than a year to map all of them.

The organisation then gains the ability to track its processes without having to rely on elaborate business process management tools: it can capture the mappings in Excel, store the information in any relational database and extract insights – where are the propensities, affinities, gaps and manual processes, and more importantly, which accords are they mapped to?

Mapping data is stored with timestamps and current version indicators, so if a process changes over time, or a rule, control or validation has changed, this information will be captured, indicating when it happened and where it was initiated. At the press of a button, the organisation is then able to demonstrate the exact lineage, drill down to any process within the system, and indicate where the concentration of effort lies, and where rules, conditions and checks are done within processes.

Additionally, it can attach risk weights at process level or accord level, helping shape strategy and gauge strategy execution.
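
As an illustration of what one such mapping record might contain, a minimal sketch follows; the process names, clause references, rules and risk weights are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

# Minimal sketch of a process-to-accord mapping record with the timestamp,
# current-version indicator and risk weight described above. The process
# names, clause references, rules and weights are hypothetical.

@dataclass
class ProcessMapping:
    process: str         # operational or technical process
    accord: str          # regulation, policy or guideline clause
    rule: str            # control or check applied within the process
    risk_weight: float   # weighting used to focus strategy and audits
    valid_from: date
    is_current: bool     # superseded versions keep is_current = False

mappings = [
    ProcessMapping("customer_onboarding", "POPIA s11", "verify consent captured",
                   0.8, date(2019, 3, 1), True),
    ProcessMapping("monthly_loss_report", "BCBS 239 principle 3", "reconcile to GL",
                   0.9, date(2019, 6, 1), True),
]

# Audit view: current mappings for a given accord, highest risk weight first.
for m in sorted((m for m in mappings if m.is_current and "BCBS" in m.accord),
                key=lambda m: m.risk_weight, reverse=True):
    print(m.process, "->", m.accord, "| rule:", m.rule)
```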

Not only does this mapping give enterprises clear linkages between policies or regulations and processes, it also gives new visibility into inefficiencies, the people and divisions involved in each process, and more – helping to enhance efficiencies and supporting overall organisational strategy.

With governance and compliance mandatory, it’s high time organisations moved to support governance and compliance evidence, and make the auditing process simpler and more effective.

Thesele takes 40% stake in Knowledge Integration Dynamics (Pty) Ltd. (KID)

Thesele Group (Thesele) has bought a 40% stake in Knowledge Integration Dynamics (KID), marking the investment holding company’s first foray into the ICT space, and making KID South Africa’s largest black-owned, focused data management solutions company.

The multi-million rand deal, which came into effect last month, makes KID a majority black-owned entity with a Level 4 BBBEE rating, with several of its subsidiaries now 100% black-owned as well as BBBEE Level 1 rated companies.

KID co-founder and MD Aubrey van Aswegen says the investment marks the start of a new growth phase for the data management specialists. “KID has grown fairly organically over its 20-year history, but we are now approaching a point where fuelling the same pace of growth will demand a more aggressive expansion phase and possibly strategic acquisitions. Our new partnership with Thesele Group will support this growth strategy,” he says.


Thesele, founded in 2005 by Sello Moloko and Thabo Leeuw, has a diverse investment portfolio across financial services, logistics, manufacturing and automotive industries. Thesele recently announced its acquisition of a 35% stake in South African water and wastewater solutions provider Talbot & Talbot. The KID acquisition is in line with Thesele’s long-term investment approach in existing and emerging growth sectors, says Thesele Executive Director Oliver Petersen.

Van Aswegen says KID had been in the market for a suitable BEE partner for some time. “We were looking for a suitable investor to not only improve our scorecard, but to play an active role in business development for us and bolster our growth aspirations,” he says. Thesele’s track record, networks and reputation in the investment community, along with its ethical approach to business, aligned with KID’s own culture and business model. The partnership will not be a ’passive’ one, he says. Thesele will work closely with KID to support mutually beneficial growth.

For Thesele, the investment in KID leverages several synergies, including the fact that “both entities have long operated in the financial services sector,” says Petersen. “Both groups also have the view that data and data management is a key growth area, with a wide range of opportunities in areas such as big data, the Internet of Things, automation, robotics and Artificial Intelligence.”

“This is a key milestone – not only for KID as a company, but also for its stakeholders, including staff and customers,” Van Aswegen says. “It will facilitate growth for us, and we look forward to Thesele growing their exposure to the ICT space using KID as the platform.”

About Thesele Group

https://www.thesele.co.za/pages/about-us

 

 

Data in transit raises security risks

Keeping data secure can be as daunting as herding cats, unless data governance is approached strategically.

There is no doubt data proliferation is presenting a challenge to organisations. IDC predicts the data created and shared every year will reach 180 zettabytes in 2025; and we can expect much of that data to be in transit a lot of the time.


This means it will not be securely locked down in data centres, but travelling across layers throughout enterprises, across the globe and in and out of the cloud. This proliferation of data across multiple layers is raising concern among CIOs and businesses worldwide, particularly in light of new legislation coming into play, such as the General Data Protection Regulation (GDPR) in Europe, due to be implemented next year.

Where traditionally, data resided safely within enterprises, it is now in motion almost constantly. Even data on-premises is transported between business units and among branches within the enterprise, presenting the risk of security chaos. The bulk of the core data is always in movement – these are enabling pieces of data moving within the local domain. At every stage and every endpoint, there is a risk of accidental or deliberate leaks.

When data is copied and transmitted via e-mail, porting or some other mechanism from one location to another, it is not always encrypted or digitally signed. Enabling this requires classifying data assets against the security measures each would require, and such classification is not evident in most companies today.

At the next layer, commonly described as the ‘fog’ just below the cloud, data and information travelling between applications and devices off-premises are also at risk. A great deal of data is shared in peer-to-peer networks, connected appliances or by connected cars. If this data is not secured, it too could end up in the wrong hands.


Most companies have data security policies and measures in place, but these usually only apply on-premises. Many lack effective measures when the data physically leaves the premises on employee laptops, mobile devices and memory sticks. These devices are then used in unsecured WiFi areas, or they are stolen or lost, putting company IP at risk. Data on mobile devices must be protected using locks, passwords, tracking micro-dots, and encryption and decryption tools.

Finally, at the cloud layer, data stored, managed and processed in the cloud is at risk unless caution is exercised in selecting cloud service providers and network security protocols, and applying effective cloud data governance.

While large enterprises are becoming well versed in ensuring data governance and compliance in the cloud, small and mid-sized enterprises (SMEs) are becoming increasingly vulnerable to risk in the IoT/cloud era.

For many SMEs, the cloud is the only option in the face of capex constraints, and due diligence might be overlooked in the quest for convenience. Many SMEs would, for example, sign up for a free online accounting package without considering who will have access to their client information, and how secure that data is.

Locking down data that now exists across multiple layers and vast geographical areas, and is constantly in transit, demands several measures. Data must be protected at source, or ‘at the bone’. In this way, even if all tiers of security are breached, the ultimate protection remains in place on the data elements themselves, at cell level, throughout their lifecycles. Effective encryption, identity management and point-in-time controls are also important for ensuring data is accessible only when and where it should be available, and only to those authorised to access it.
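
As one hedged illustration of protecting data ‘at the bone’, individual field values can be encrypted at source so the record stays protected wherever it travels. The sketch below uses symmetric Fernet encryption from the Python cryptography package; the field names and record are hypothetical, and in practice the key would sit in a key management service.

```python
from cryptography.fernet import Fernet

# Minimal sketch of cell-level protection: sensitive fields are encrypted at
# source, so the record stays protected even if every outer security tier is
# breached while it is in transit. Field names and values are hypothetical;
# in practice the key would live in a key management service.

SENSITIVE_FIELDS = {"id_number", "account_number"}

key = Fernet.generate_key()
cipher = Fernet(key)

def protect(record: dict) -> dict:
    """Encrypt only the sensitive fields before the record leaves the source."""
    return {field: cipher.encrypt(value.encode()).decode()
            if field in SENSITIVE_FIELDS else value
            for field, value in record.items()}

def reveal(record: dict) -> dict:
    """Decrypt sensitive fields; possible only for callers holding the key."""
    return {field: cipher.decrypt(value.encode()).decode()
            if field in SENSITIVE_FIELDS else value
            for field, value in record.items()}

customer = {"name": "A. Customer", "id_number": "8001015009087",
            "account_number": "123456"}
in_transit = protect(customer)   # safe to e-mail, port or copy
print(reveal(in_transit))        # restored only where access is authorised
```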

Role and policy-based access controls must be implemented throughout the data lifecycle, and organisations must have the ability to implement these down to field and data element level.

In an ever-growing data ecosystem, enterprise systems should be architected from the ground up with compliance in mind, with data quality and security as the two key pillars of compliance. In addition, compliance education and awareness must be an ongoing priority.

All stakeholders, from application developers through to data stewards, analysts and business users, must be continually trained to put effective data governance at the heart of the business, if they are to maintain control over the fast-expanding digital data universe.

The wielder, not the axe, propels plunder aplenty

By Mervyn Mooi, Director at Knowledge Integration Dynamics (KID).
Johannesburg, 25 Sept 2014

Business intelligence is a fairly hot topic today – good news for me and my ilk – but that doesn’t mean everything about it is new and exciting. The rise and rise of BI has seen a maturation of the technologies, derived from a sweeping round of acquisitions and consolidations in the industry just a few years ago, that have created something of a standardisation of tools.

We have dashboards and scorecards, data warehouses and all the old Scandinavian-sounding LAPs: ROLAP, MOLAP, OLAP and possibly a Ragnar Lothbrok or two. And, like the Vikings knew, without some means to differentiate, everyone in the industry becomes a me-too, which means that’s what their customers ultimately get. And that makes it very hard to win battles.

Building new frameworks around tools to achieve some sense of differentiation achieves just that: only a sense of differentiation. In fact, even when it comes to measurements, most measures, indicators and references in BI today are calculated in a common manner across businesses. They typically use financial measures, such as monthly revenues, costs, interest and so on. The real difference, however, comes in preparing the data and the rules that are applied to the function.

Viking_Boat

A basic example that illustrates the point: let’s say the Vikings want to invade England and make off with some loot. Before they can embark on their journey of conquest they need to ascertain a few facts. Do they have enough men to defeat the forces in England? Do they have enough ships to get them there? Do they know how to navigate the ocean? Are their ships capable of safely crossing? Can they carry enough stores to see them through the campaign or will they need to raid settlements for food when they arrive? Would those settlements be available to them? How much booty are they likely to capture? Can they carry it all home? Will it be enough to warrant the cost of the expedition?

The simple answer is that the first time they set sail they had absolutely no idea, because they had no data. It was a massive risk of the type most organisations aim to avoid these days. So before they could even begin to analyse the pros and cons, they had to get at the raw data itself. And that’s the same issue most organisations have today. They need the raw data, but – in the Viking context – they don’t need it from travellers and mystics, spirits and whispers carried on the wind. It must be good quality data derived from reliable sources and a good geographic cross-section. It is in preparing their facts – checking they are correct, that they come from reliable sources and that there has been no case of broken telephone – that businesses truly make a difference. Information is king in war because it allows a much smaller force to figure out where to maximise its impact upon a potentially much larger enemy. The same is true in business today.

Before the Vikings could begin to loot and pillage they had to know where they could put ashore quickly to effect a surprise raid with overwhelming odds in their favour. In business you could say that you need to know the basic facts before you drill down for the nuggets that await.

The first Viking raids grew to become larger as the information the Vikings had about England grew. Pretty soon they had banded their tribes or groups together, shared their knowledge and were working toward a common goal: getting rich by looting England. In business, too, divisions, units or operating companies may individually gain knowledge that it makes sense to share with the rest to work toward the most sought-after plunder: the overall business strategy.

Because the tools and technologies supply common functionality and businesses or implementers can put them together in fairly standard approaches as they choose, the real differentiator for BI is the data itself and how the data is prepared – what rules are applied to it before it enters the BI systems. Preparation is king.

These rules ultimately differentiate between information based on wind-carried whispers and information based on reliable reports.