By: Paul Pilotte, Technical Marketing Manager, MathWorks
Over the last decade, data science has made its mark on almost every industry beyond IT: retail, manufacturing, industrial equipment, transport, healthcare, insurance and more. Across the board, organisations are acknowledging the importance of using data to stay agile and competitive.
The ways in which data drives organisations have also evolved rapidly. Most business analytics is still “post mortem”: crunching historical data to examine past performance and look for patterns of failure or success, in an effort to avoid or replicate them. This is termed descriptive analytics. The exponential increase in data over the last decade, accelerated by IoT-enabled devices, took big data to the next level with predictive analytics, in which historical data (both internal and external) is combined with rules and algorithms to predict likely outcomes.

Today it is possible to build a real-time system that minimizes HVAC energy costs in large-scale commercial buildings through proactive, predictive optimization. BuildingIQ has developed Predictive Energy Optimization (PEO), a cloud-based software platform that reduces HVAC energy consumption by 10–25% during normal operation. Another example is Baker Hughes, which is developing advanced directional drilling services that incorporate algorithms to help oil and gas operators place wellbores precisely in their reservoirs. Running on an embedded processor amid the intense vibrations of the downhole environment, these algorithms accurately measure the inclination and azimuth of the borehole as it is drilled.
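To make the predictive step concrete, here is a minimal sketch in MATLAB of fitting a model to historical data and using it to predict a likely outcome. The file name, variable names and forecast values are invented for illustration (this is not BuildingIQ's actual pipeline), and it assumes the Statistics and Machine Learning Toolbox:

```matlab
% Hypothetical historical building data: outdoor temperature, occupancy,
% and the energy consumption we want to predict (file name is assumed)
data = readtable('hvac_history.csv');

% Fit a linear regression model of energy use on the operating conditions
mdl = fitlm(data, 'Energy ~ OutdoorTemp + Occupancy');

% Predict likely energy consumption for tomorrow's forecast conditions
tomorrow = table(30.5, 250, 'VariableNames', {'OutdoorTemp', 'Occupancy'});
predictedEnergy = predict(mdl, tomorrow);
```

In a real deployment a model like this would be retrained continuously as new readings arrive; the sketch only shows the core idea of predicting from history.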
The promise, potential and practicality of big data continue to grow. To turn these into impactful business decisions, practitioners will increasingly adopt prescriptive analytics, which not only predicts future outcomes but also computes strategic, actionable recommendations on how best to act on them. One example is how Scania engineers used MATLAB tools for Model-Based Design to model and simulate the driver support system, develop a prototype user interface, and generate embedded code for prototype and production targets. Other examples include decision support systems for energy trading, financial portfolio optimization and the like.
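To illustrate what computing a recommendation can look like, here is a minimal sketch of mean-variance portfolio optimization in MATLAB (it assumes the Optimization Toolbox, and the returns, covariances and target are invented numbers). The point is that the output is an action, a set of portfolio weights, rather than just a prediction:

```matlab
% Hypothetical inputs: predicted annual returns and covariance of 3 assets
mu    = [0.08; 0.12; 0.10];           % expected returns (illustrative)
Sigma = [0.10 0.02 0.04;
         0.02 0.30 0.06;
         0.04 0.06 0.15];             % return covariance (illustrative)
targetReturn = 0.10;

% Prescriptive step: choose weights w that minimize portfolio variance,
% subject to hitting the target return, investing fully, and no shorting
f   = zeros(3, 1);                    % no linear term in the objective
Aeq = [mu'; ones(1, 3)];              % return and budget constraints
beq = [targetReturn; 1];
lb  = zeros(3, 1);                    % weights must be non-negative
w = quadprog(Sigma, f, [], [], Aeq, beq, lb, []);
```

The prediction (expected returns) feeds into an optimization that prescribes the decision itself, which is the hallmark of prescriptive analytics.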
Furthermore, advances in the tools and technology that support big data applications will make it easier for even traditional (non-tech) companies to develop in-house data science talent. A perpetual problem for these companies is that they are forced to bring specialised data scientists on board to manage their data. While this cohort may excel at crunching numbers in search of a solution, they are not well equipped to articulate the problem that needs solving, since that requires domain expertise (for example, knowledge of specific manufacturing processes). The result is that, despite the data and the science, the problems peculiar to these companies' operations go unaddressed.
More sophisticated machine learning (or artificial intelligence) workflows will become accessible, making it more cost-effective for businesses to train existing domain experts than to risk finding and onboarding data scientists. For example, transfer learning will mitigate the need for large training sets, and NVIDIA GPU instances on Amazon EC2 will make it easy for anyone to get started with deep learning in minutes. Transfer learning allows deep learning models to be customized more easily and rapidly, without large labeled datasets. As an analogy, someone who has mastered the guitar will learn the piano faster than someone who has never played a musical instrument, because the experience of learning the guitar carries over. In the same way, the learning and insights gleaned from one data set can be transferred to another problem, mitigating the need to build a new labeled data set from scratch (see the sketch below). Tools like these will make data science readily applicable to cross-industry problems and fuel adoption across the board.
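As a rough sketch of that workflow in MATLAB, assuming the Deep Learning Toolbox and its AlexNet support package are installed (the folder name and class count below are placeholders), transfer learning amounts to keeping a pretrained network's feature layers and retraining only the final layers on a small labeled data set:

```matlab
% Load a pretrained network (requires the AlexNet support package)
net = alexnet;

% Keep the learned feature layers; replace the final classification layers
% to match the number of classes in the new task (5 is a placeholder)
numClasses = 5;
layers = [
    net.Layers(1:end-3)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

% Point an image datastore at a small labeled data set ('newData' is a
% placeholder folder whose subfolder names serve as labels)
imds = imageDatastore('newData', 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');

% Resize images to the network's expected input size, then retrain briefly;
% most of the learned features transfer unchanged
inputSize = net.Layers(1).InputSize;
augimds = augmentedImageDatastore(inputSize(1:2), imds);
options = trainingOptions('sgdm', 'MaxEpochs', 5, 'InitialLearnRate', 1e-4);
netTransfer = trainNetwork(augimds, layers, options);
```

Because only the final layers start from scratch, a modest number of labeled images per class can often suffice where training a full network would require a far larger data set.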
The IoT (Internet of Things) is expanding, bringing online billions of “chatty” devices (sensors, cameras, TVs, refrigerators, wearables and so on), which means far more data to aggregate and manage. These hybrid data sets (mixes of text, video, images, etc.) and diverse IT systems will push demand for converged systems, interoperability, smart data and predictive algorithms that can work seamlessly across them.
There is a lot to look forward to in the coming year. Data science and technology promise converged workflows and solutions with ever wider applicability, helping organisations streamline operations, secure their assets, build actionable business insights and develop competitive strategies.