Being passionate about data can return extraordinary rewards, both in the intrinsic satisfaction of generating knowledge by applying Artificial Intelligence (AI) techniques and in the potential “knowledge services” we can build to deliver significant value to our clients. Over the last decade, a number of developments have changed how we think about AI applications, such as:

  • The ability to generate massive amounts of digital data
  • Cloud computing becoming affordable, resilient, and scalable for storing and processing digital data at large scale
  • The open source community actively implementing AI libraries for several programming languages

The first two milestones are often referred to collectively as Big Data. In the two charts below, we can see that the rise in interest in Big Data correlates with a rise in interest in AI topics.


Worldwide interest in Big Data and AI over time.
Worldwide interest in AI libraries over time.

This correlation makes sense if we consider that Big Data has broadened both the supply of data and the processing power available for AI use cases such as recommendation engines, image matching, time series forecasting, and marketing personalization.

But does this rise in analytic activity reflect a rise in the benefits that organizations have actually obtained from it?

For some organizations, the answer is definitely yes. For others, it is too early to measure the long-term benefits. For most enterprises, however, the competitive advantage of data-oriented applications remains out of reach. Here we’ll look at the three most common barriers organizations face in applying their data for competitive advantage and accelerated company growth.

Barrier One: Financial Resources

Many organizations do not invest in hiring the roles needed to profit from their data initiatives. The minimum technical profiles necessary to build a team dedicated to delivering professional data applications are: Business Analyst, Data Scientist, (Big) Data Engineer, and DevOps Engineer.

On average, each of these members will increase the payroll of the company by $100K per year.

When it comes to software, data integration and analytical tools are available under both open source and proprietary licenses. However, if your team prefers commercially licensed software, you will need to budget roughly another $150K per year.

And of course you need hardware to train the models, run the applications, and store the enriched datasets. On average, you could spend around $20K per year on cloud computing for this purpose.

When you add all this up (four salaries at roughly $100K each, about $20K in cloud computing, and up to $150K in software licenses), a first-year investment of roughly $420K to $570K with an unclear view of the potential benefits can be a risky venture for any company attempting to leverage its data for competitive advantage.

Barrier Two: Data Team Expertise

The complexity and governance of the raw data feeding your repositories may vary drastically from one project to another. In some cases, the raw data does not meet the conditions AI models need to converge to a stable result. In other cases, the source data contains numerous variables without a suitable selection or reduction, and again the models will not converge successfully.
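
As a minimal sketch of what such a selection or reduction can look like in practice (the scikit-learn calls, synthetic dataset, and choice of 10 variables below are illustrative assumptions, not a prescription), a team might keep only the variables that carry information about the target before any modeling:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic stand-in: 200 candidate variables, only a handful are informative.
X, y = make_classification(n_samples=1000, n_features=200,
                           n_informative=10, random_state=0)

# Keep the 10 variables sharing the most mutual information with the target,
# so downstream models train on a compact, convergence-friendly feature set.
selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
X_reduced = selector.transform(X)
print(X_reduced.shape)  # (1000, 10)
```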

The competence of a Data Scientist is critical to remedy unfavorable input data conditions, but the same is true of the other roles in the organization. For instance, an inappropriate business perspective from any team member could lead to the development of irrelevant KPIs.

Barrier Three: The Sensitivity and Privacy of Data

Needless to say, a client’s data is one of its major assets, and unauthorized access to it could carry serious financial and legal consequences for the business.

This might be the main reason why many organizations are reluctant to outsource Data Science as a service. And those that take on these analytical activities themselves run a serious risk, because truly benefiting from company data requires individuals who are solely focused on generating knowledge and value from raw data.

Fortunately, a mathematical technique called Principal Component Analysis (PCA) can be applied to safely encode a client’s input data while preserving the geometric structure, such as the distances between records, that many models rely on.

With this obfuscated but algebraically consistent data set, external consultants can develop AI models and applications that generate enriched information without ever seeing the original data. In the end, the client only needs to decode the obfuscated, enriched data back into its original format to obtain the valuable insights.
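
As a minimal sketch of this encode, model, decode workflow (the synthetic data, scikit-learn calls, and k-means clustering below are illustrative assumptions rather than the exact procedure we follow), the idea looks roughly like this:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# --- Client side: encode ----------------------------------------------------
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))        # stand-in for the client's sensitive records

pca = PCA(n_components=X.shape[1])   # full-rank PCA: an invertible rotation
Z = pca.fit_transform(X)             # obfuscated view shared with consultants

# --- Consultant side: model the encoded data --------------------------------
# The rotation preserves distances between records, so distance-based methods
# such as k-means behave the same on Z as they would on the raw data.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Z)

# --- Client side: decode the enriched output --------------------------------
segments = kmeans.labels_                                   # per-record enrichment
centroids = pca.inverse_transform(kmeans.cluster_centers_)  # back to original units
print(segments[:10], centroids.shape)
```

One practical point: the client keeps the fitted PCA object (its mean vector and component matrix) private, since whoever holds it can map the encoded data back to the original feature space.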

At Intersys, we have many talented consultants who can help you deliver an effective short-term AI use case to take your data strategy to the next level and achieve competitive advantage.
