
Data analytics trends in 2020

By Helena Schwenk, Market Intelligence Lead at Exasol, and Michael Glenn, Market Intelligence Analyst at Exasol

One of the key technology highlights of the past decade has been the exponential growth in organisational data. Whether by accident or design, the amount of data available to every business truly exploded.

Today, many businesses — particularly in areas such as retail and financial services — cannot function without data analytics. And beyond that, there is an even broader range of organisations that now understand that success rests on their ability to become more data-driven and skilled in interpreting and managing their data.

However, being data-driven doesn’t just mean having automated dashboards and reporting – it’s about using data-based algorithms to support your analytical thinking, so you have a greater probability of making the right choice when faced with business-critical decisions.

Too many companies are still drowning in data lakes or time-consuming reporting, unable to realise the value of all the information at their disposal, because the focus is on gathering data, not making sense of it.

However, the good news from 2019 is that more and more organisations grappling with big data have woken up to the power of a robust analytics database as a route to achieving unrivalled performance at scale.

As 2019 draws to a close, Exasol’s Market Intelligence Lead, Helena Schwenk, and Market Intelligence Analyst, Michael Glenn, predict what new data analytics trends will gain momentum and impact businesses in 2020 and beyond:

  1. AI adoption is limited to larger enterprises and/or those that are mature and analytically advanced 

In 2020 we will continue to see AI investments gather speed, but for most companies this will be confined to narrow use cases that allow them to pick off the low-hanging fruit in their industries. For example, CPG firms are more likely to invest in physical robotics for the factory floor, and telcos will invest in customer-facing virtual agents.

The top performers will look to use AI to generate value more broadly across business lines and functions. For example, sentiment analysis can be used not only to gain a deep understanding of customer complaints, but also to inform marketing content and micro-segmentation for sophisticated sales strategies. Shared sentiment around an issue will sit alongside spending patterns as an input to next-to-buy models and deep marketing personalisation.
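
As a rough illustration of the kind of sentiment analysis we have in mind, the short Python sketch below scores a handful of customer complaints using the open-source Hugging Face transformers library. The library choice and the sample complaints are assumptions made for the example, not a specific recommendation from this article.

```python
# A minimal sentiment-analysis sketch using the open-source Hugging Face
# transformers library. The library choice and the sample complaints are
# illustrative assumptions, not part of the original article.
from transformers import pipeline

# Load a general-purpose sentiment classifier (downloads a default model).
classifier = pipeline("sentiment-analysis")

complaints = [
    "The delivery was three days late and nobody answered my emails.",
    "Great product, but the checkout process kept crashing.",
]

# Each result contains a label (POSITIVE/NEGATIVE) and a confidence score,
# which could feed complaint triage or marketing micro-segmentation.
for text, result in zip(complaints, classifier(complaints)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {text}")
```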

Data analytics success in 2020 will also rely heavily on organisations addressing perennial challenges such as breaking down data silos, tackling the data science skills shortage and changing cultural thinking around using data and analytics.

Rome wasn’t built in a day, but a powerful analytics database is a wise first step to set you on the right path to becoming a truly data-driven organisation.

  • Propelling wider adoption among smaller companies, or those struggling with AI, requires a more consistent effort to generate training data

As alluded to above, one particular barrier to broad adoption of AI is a lack of training data. For large tech firms like Google, Apple, and Amazon, gathering data is far less arduous than it is for most companies. Because of the breadth and depth of their products and services, they have a near-endless supply of diverse data streams, creating the perfect environment for their data scientists to train their algorithms. For smaller companies, access to comparable datasets is limited or simply too expensive.

Synthetic datasets will allow smaller or less advanced companies to make meaningful strides in their AI journey. Synthetic data is data that is generated programmatically: for example, realistic images of objects in arbitrary scenes rendered with a video game engine, or audio generated by a speech synthesis model from known text. The two most common strategies for using synthetic data that we expect to see are:

  1. Fitting real statistical distributions to observed data and reproducing fake data that follows the same patterns (see the sketch below).
  2. Building a model that explains observed behaviour and then using it to generate random data. This approach helps in understanding how interactions between distinct agents affect the system as a whole.
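
To make the first strategy more concrete, the sketch below fits simple statistical distributions to a set of observed values and then samples a larger synthetic dataset with the same shape. It is a minimal illustration in Python using NumPy; the distribution choices and variable names are assumptions made for the example, not taken from any particular project.

```python
# A minimal sketch of the first strategy: fit statistical distributions to
# observed data, then sample synthetic records that follow the same patterns.
# The variable names and distribution choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-ins for real observations: customer basket values and item counts.
real_basket_value = rng.lognormal(mean=3.5, sigma=0.6, size=1_000)
real_item_count = rng.poisson(lam=4.0, size=1_000)

# Step 1: estimate the parameters of the observed distributions...
log_values = np.log(real_basket_value)
mu_hat, sigma_hat = log_values.mean(), log_values.std()
lam_hat = real_item_count.mean()

# Step 2: ...and reproduce "fake" data according to those patterns.
synthetic_basket_value = rng.lognormal(mean=mu_hat, sigma=sigma_hat, size=10_000)
synthetic_item_count = rng.poisson(lam=lam_hat, size=10_000)

print(f"real mean basket:      {real_basket_value.mean():.2f}")
print(f"synthetic mean basket: {synthetic_basket_value.mean():.2f}")
```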

Companies that considered their data storage needs to be minimal will realise that they need a sophisticated solution to house their synthetic data if they are to compete on the more demanding elements of machine learning.

  • For those companies already serious about AI, GPUs promise to improve the accuracy and performance of more sophisticated deep learning applications 

It is still early days for enterprise adoption of deep learning AI. Nonetheless, it does offer real opportunities for organisations to build applications that identify objects in images, recognise the spoken word and create highly accurate predictive models from vast amounts of data.

Deep learning can run on regular CPUs, but in 2020, for serious enterprise projects, we expect more data science teams to explore specialised chips such as GPUs, which can handle massively parallel workloads to accelerate the training and retraining of models. Without these specialised chips, deep learning may not be practical or economical, since the discipline requires significant compute capacity to process high volumes of data.

GPUs’ parallel computing capabilities mean they can be applied both to training models and to inferencing, where models are used in production applications to make predictions on live data. Both are suitable candidates; however, the possibility of significantly improving the performance of the more compute-intensive training stage makes it an obvious starting point. In these scenarios, GPUs hold real potential to transform how data science teams work with data and continually improve the speed, accuracy and intelligence of AI applications.
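
As a minimal illustration of what this looks like in practice, the sketch below runs a small training loop on a GPU when one is available and falls back to the CPU otherwise. We have assumed PyTorch purely for the example; the same pattern applies to other deep learning frameworks, and the model and data are placeholders.

```python
# A minimal PyTorch sketch (a framework assumption; the article does not name
# one) showing how the same training loop runs on a GPU when available.
import torch
import torch.nn as nn

# Use a GPU if one is present, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny illustrative model and some synthetic training data.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1)).to(device)
features = torch.randn(4096, 64, device=device)
targets = torch.randn(4096, 1, device=device)

optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Each step runs the forward and backward passes on whichever device was
# selected; on a GPU the batched matrix maths is parallelised across
# thousands of cores, which is where the training speed-up comes from.
for step in range(100):
    optimiser.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimiser.step()

print(f"trained on {device}, final loss {loss.item():.4f}")
```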

About the author

Helena Schwenk is Market Intelligence Lead at Exasol. She specialises in technology trends, competitive landscapes and go-to-market strategies and uses this knowledge to keep Exasol’s marketing, sales and product management teams fully connected to the wider industry landscape.

Schwenk also writes and presents frequently on the issues, developments and dynamics impacting data analytics technology adoption. She has over 24 years’ experience in the data analytics field, having spent 18 years as an industry analyst specialising in Big Data, Advanced Analytics and, latterly, AI, as well as six years as a data warehousing and BI practitioner.

Michael Glenn is a Market Intelligence Analyst at Exasol. He recently joined Exasol from global research leader Forrester, where he spent over two and a half years as a senior research associate covering fields associated with data analytics, such as cloud computing, business intelligence software, robotic process automation, and innovation strategies.

About Exasol

Exasol is the analytics database. Its high-performance in-memory analytics database gives organisations the power to transform how they work with data – on-premises, in the cloud or both – and turn it into value faster, more easily and more cost-effectively than ever before.

To learn more about Exasol please visit www.exasol.com