So what is Machine Learning?

Machine Learning (ML) is everywhere today! If you use any of the apps or websites from Amazon, Google, Uber, Netflix, Waze, or Facebook in your busy daily life, they are all supported internally by machine learning algorithms. Most of the top companies in every sector, from Aviation to Oil and Gas, Banking to Retail, Ecommerce to Transportation, have products and projects using machine learning tools and technologies. Machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it.

Wikipedia defines machine learning as “a subfield of computer science (CS) and artificial intelligence (AI) that deals with the construction and study of systems that can learn from data, rather than follow only explicitly programmed instructions.” As per IBM, machine learning is the science of how computers make sense of data using algorithms and analytic models. As per SAS, machine learning is a method of data analysis that automates analytical model building. Using algorithms that iteratively learn from data, machine learning allows computers to find hidden insights without being explicitly programmed where to look.

As per Stanford, machine learning is the science of getting computers to act without being explicitly programmed. It enables cognitive systems to learn, reason, and engage with us in a more natural and personalized way. The most powerful form of machine learning in use today, called “deep learning,” builds a complex mathematical structure called a neural network from vast quantities of data. Designed to be analogous to how a human brain works, neural networks themselves were first described in the 1940s. But it’s only in the last three or four years that computers have become powerful enough to use them effectively. If deep learning turns out to be as big as the internet, it’s time for everyone to start looking closely at it.

Machine Learning (ML) is a form of AI that facilitates a computer’s ability to learn and essentially teach itself to evolve as it is exposed to new and ever-changing data. The main components of ML software are statistical analysis and predictive analysis. ML algorithms can spot patterns and find hidden insights in observed data from previous computations without being programmed where to look. They learn from every experience and interaction.

Some common ML techniques are Linear Regression, K-means, Decision Trees, Random Forest, PCA, SVM, and Artificial Neural Networks (ANN). Artificial Neural Networks are where the field of Deep Learning had its genesis.
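
To make two of these techniques concrete, here is a minimal sketch using scikit-learn (one of the open-source libraries listed later in this blog) on made-up toy data; the numbers and variable names are illustrative only, not from any real project:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Linear Regression: fit y = 2x + 1 from four sample points.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
reg = LinearRegression().fit(X, y)
prediction = reg.predict([[4.0]])[0]  # the learned line extrapolates to a new input

# K-means: group points into two clusters without any labels.
points = np.array([[0.0], [0.2], [10.0], [10.2]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
# Nearby points end up sharing a cluster label; far-apart points do not.
```

The same fit/predict pattern carries over to Decision Trees, Random Forest, PCA, and SVM in scikit-learn, which is part of why it is a popular library for learning these techniques.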

The Internet of Things (IoT) will produce billions and billions of data points from billions of connected devices by 2020. Machine learning can help companies take the billions of data points they have and boil them down to what’s really meaningful. The future realization of IoT’s promise depends on machine learning to find the patterns, correlations, and anomalies that have the potential to enable improvements in almost every facet of our daily lives.

As per HBR, machine learning has tremendous potential to transform companies. Executives who want to get the most out of their companies’ data should understand what it is, what it can do, and what to watch out for when using it. It’s time to let the machines point out where the opportunities truly are! In our next blog, we will take a journey into the various use cases and implementations of ML in different industries.

Share with me your use cases of ML that you want to get included in my next blog!

Top 14 Machine Learning Tools for Data Scientists

IBM, Google, Microsoft, Amazon, and several other companies offer Machine Learning tools and APIs as part of their cloud offerings. Last year, Amazon announced the open-sourcing of its deep learning library, the Deep Scalable Sparse Tensor Network Engine (DSSTNE), which is now available on GitHub. Google followed by opening its SyntaxNet neural network framework for developers to build applications that can process human language. In October 2016, IBM renamed its Predictive Analytics Service ‘Watson Machine Learning’. The focus is to provide deeper and more sophisticated self-learning capabilities as well as enhanced model management and deployment functionality within the service.

Machine learning has taken center stage in many advanced analytical and predictive models. If you are a user of Amazon.com or Netflix, you are consuming machine learning recommendations. Similarly, most websites customize and personalize themselves to your taste, behavior, and style using machine learning that learns from previous patterns, enabling a cognitive system to learn, reason, and engage with you in a natural and personalized way.
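
As a toy illustration of how such recommendations can work, here is a minimal item-based collaborative filtering sketch in NumPy; the ratings matrix is invented for the example, and real systems at Amazon or Netflix are far larger and use many more signals:

```python
import numpy as np

# Toy user-item ratings matrix (rows: users, columns: items); 0 means "not rated".
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def item_similarity(R):
    """Cosine similarity between item (column) rating vectors."""
    norms = np.linalg.norm(R, axis=0)
    return (R.T @ R) / np.outer(norms, norms)

S = item_similarity(R)

# For a user who liked item 0, recommend the other item rated most similarly.
best = max((j for j in range(R.shape[1]) if j != 0), key=lambda j: S[0, j])
```

Item 1 wins here because the same users who rated item 0 highly also rated item 1 highly; that "people who liked this also liked..." pattern is the core of the technique.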

Machine Learning has entered every industry, from Retail, Manufacturing, Banking, Cable, Games and Sports, and Media/Entertainment to many more. Today’s machine learning uses analytic models and algorithms that iteratively learn from data, allowing computers to find hidden insights without being explicitly programmed where to look. This means data analysts and scientists can teach computers to solve problems without having to recode rules each time a new data set is presented. Using algorithms that learn by looking at hundreds or thousands of data samples, computers can make predictions based on these learned experiences to solve the same problem in new situations. And they’re doing it with a level of accuracy that is beginning to mimic human intelligence.
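
That learn-from-samples-then-predict loop can be sketched in a few lines with scikit-learn’s bundled Iris dataset; the choice of a nearest-neighbor classifier here is just an illustration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Learn from labeled samples...
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# ...then measure predictions on samples the model has never seen.
accuracy = model.score(X_test, y_test)
```

No rule about petal or sepal measurements was ever coded by hand; the model inferred the boundaries from the labeled examples, which is exactly the point of the paragraph above.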

Here are some of the top machine learning tools:

  1. IBM Machine Learning – https://console.ng.bluemix.net/catalog/services/ibm-watsonmachinelearning/
  2. Microsoft Azure Machine Learning – https://azure.microsoft.com/en-us/services/machinelearning/
  3. Google Machine Learning – https://cloud.google.com/products/machinelearning/
  4. Amazon Machine Learning – https://aws.amazon.com/machinelearning/
  5. Scikit-learn – https://github.com/scikit-learn/scikit-learn
  6. Shogun – https://github.com/shogun-toolbox/shogun
  7. Mahout – https://mahout.apache.org/
  8. Apache Spark MLlib – https://spark.apache.org/mllib/
  9. Weka – http://www.cs.waikato.ac.nz/ml/weka/
  10. Cloudera – http://www.cloudera.com/training/courses/intro-machine-learning.html
  11. BigML – https://bigml.com/
  12. TensorFlow – https://www.tensorflow.org/
  13. H2O – http://www.h2o.ai/
  14. Veles – https://velesnet.ml/

Here’s the latest Machine Intelligence Landscape, published in early 2016 by Shivon Zilis (shivonzilis.com).

Tell us about your ML experience with any of the tools above and any new tools that you want to share with other readers!

AI vs. Machine Learning vs. Deep Learning

Artificial Intelligence, Machine Learning, and Deep Learning were among the most hyped terms of 2016 and remain so going into 2017. These techniques have existed for decades, but their application to the business world has only recently entered the mainstream. Machine learning and deep learning are both fundamentally forms of artificial intelligence. While the concepts of machine learning and deep learning have been around since as early as the 1950s, both have evolved and diverged from each other decade by decade through new technology and experiments.

As per Stanford University, AI is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

So where exactly did AI start? Hmmm… After WWII, a number of people independently started to work on intelligent machines. The English mathematician Alan Turing may have been the first. He gave a lecture on it in 1947. He also may have been the first to decide that AI was best researched by programming computers rather than by building machines. By the late 1950s, there were many researchers in AI, and most of them were basing their work on programming computers.

AI has several branches, among them Search, Pattern Recognition, Logical AI, Heuristics, Genetic Programming, Epistemology, and Ontology. A few common applications are games, speech recognition, vision, expert systems, natural language processing, and heuristic classification.

Machine Learning (ML) is a form of AI that facilitates a computer’s ability to learn and essentially teach itself to evolve as it is exposed to new and ever-changing data. The main components of ML software are statistical analysis and predictive analysis. ML algorithms can spot patterns and find hidden insights in observed data from previous computations without being programmed where to look. They learn from every experience and interaction.

If you use Netflix, you will notice that the suggested movies change and are personalized to your taste based on your previous views. While machine learning has become an integral part of processing data, one of its main differences from deep learning is that it requires manual intervention in selecting which features to process, whereas deep learning learns those features itself.
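
That difference can be sketched in code. Below, a human manually decides which features (mean and standard deviation) summarize each raw signal before a classic ML model ever sees them; in deep learning, the network would instead learn its own features from the raw signal. The data is synthetic and the feature choice is this illustration’s assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic raw signals: class 0 is low-variance noise, class 1 is high-variance noise.
raw = np.vstack([rng.normal(0, 0.5, (50, 100)), rng.normal(0, 2.0, (50, 100))])
labels = np.array([0] * 50 + [1] * 50)

# Manual feature engineering: a human picks the summaries that matter.
features = np.column_stack([raw.mean(axis=1), raw.std(axis=1)])

model = LogisticRegression().fit(features, labels)
accuracy = model.score(features, labels)  # the hand-picked std feature separates the classes
```

Here the engineer’s insight (that variance distinguishes the classes) does most of the work; if that insight were wrong, the model would fail no matter how much data it saw, which is exactly the manual bottleneck deep learning tries to remove.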

Some common ML techniques are Linear Regression, K-means, Decision Trees, Random Forest, PCA, SVM, and Artificial Neural Networks (ANN). Artificial Neural Networks are where the field of Deep Learning had its genesis.
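
To show what an ANN computes at its core, here is a minimal forward pass in NumPy with random, untrained weights; a real network would learn these weights from data, and stacking many such layers is what makes a network “deep”:

```python
import numpy as np

def relu(x):
    """Rectified linear unit, a common neuron activation."""
    return np.maximum(0, x)

def forward(x, W1, b1, W2, b2):
    """One hidden-layer network: input -> hidden features -> class probabilities."""
    h = relu(W1 @ x + b1)            # hidden layer acts as learned feature detectors
    logits = W2 @ h + b2             # output layer scores each class
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()           # softmax turns scores into probabilities

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # 4 hidden units -> 2 classes

probs = forward(np.array([1.0, -0.5, 2.0]), W1, b1, W2, b2)
```

Training consists of nudging `W1, b1, W2, b2` so these output probabilities match known labels; deep learning repeats this structure over many layers and vastly more data.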

Deep Learning (DL) is an advanced, sophisticated branch of AI with predictive capabilities, inspired by the brain’s ability to learn. Andrew Ng, co-founder of Coursera and Chief Scientist at Baidu Research, co-founded the Google Brain project, which eventually resulted in the productization of deep learning technologies across a large number of Google services. Just as the human brain can identify an object in milliseconds, deep learning can mirror this instinct with nearly the same speed and precision. Deep learning has the nimble ability to assess an object, properly digest the information, and adapt to different variants.

Deep Learning is used by Google in its voice and image recognition algorithms, by Netflix and Amazon to decide what you want to watch or buy next, and by researchers at MIT to predict the future. Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power.

Here is Gartner’s August 2016 Hype Cycle; notably, Deep Learning isn’t even mentioned on the slide:

Google had two deep-learning projects underway in 2012. Today it is pursuing more than 1,000, according to a spokesperson, across all its major product sectors, including search, Android, Gmail, translation, maps, YouTube, and self-driving cars. IBM’s Watson system used AI, but not deep learning, when it beat two Jeopardy champions in 2011. Now, though, almost all of Watson’s 30 component services have been augmented by deep learning. Most of the deep-learning applications that have been commercially deployed so far involve companies like Google, Microsoft, Facebook, Baidu, and Amazon, the companies with the vast stores of data needed for deep-learning computations.

One startup consulting company, 7F Consulting, promises to leverage Machine Learning and Deep Learning in its Strategy Advisory practice, advising the C-suite on strategy decisions. Their framework is called ‘Rananiti’, meaning war strategy in Sanskrit! Rananiti uses a proprietary solution and framework that provides significant access to market insights, advising executives on critical business decisions with probabilistic recommendations!

Now, other companies have started experimenting and integrating deep learning into their own day-to-day processes. Are you the next one?

AI to the rescue of C-Suite

Artificial Intelligence (AI) has been discussed for the past several decades as science fiction and as experiments in research labs. In the past 2-3 years, however, AI has come to the forefront of discussion. It has started gaining adoption in the C-suite and the business world. Many industries, including Healthcare, Retail, Banking, Consumer Products, Insurance, Manufacturing, and Auto among several others, are now experimenting with AI tools and technologies.

A 2015 Tech Pro Research survey indicated that 24 percent of businesses across industries were already using AI or had plans to do so within the year. There are three forms of AI: Assisted, Augmented, and Autonomous. You can see all three in the auto industry, which is moving cars from Assisted AI to Augmented AI, with the goal of fully Autonomous AI-driven cars within the next year or so.

Now, how do we apply AI to business strategy and the decision-making process? Despite several advancements in cognitive technologies in the past year, a fully autonomous AI for business strategy is far from reality. However, one leading startup management consulting firm, 7F Consulting, claims to have a solution framework that incorporates market news, industry regulations, economic indicators, and social and competitive intelligence to serve as an AI-assisted strategy advisor. Imagine such a solution integrated with Alexa or Siri: it could run hypotheses through a decision tree for you and, using the data, come back with a probabilistic recommendation to consider. Over the course of multiple interactions, the system learns what insights you need, what actions you take, and what drives results, using a management framework to deliver tailored results.
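
The decision-tree-with-probabilities idea can be sketched independently of 7F Consulting’s proprietary Rananiti framework (whose internals are not public); the feature names and training rows below are entirely hypothetical:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical historical decisions: [market_growth_pct, competitor_count, regulation_score];
# label 1 means "entering the market" worked out, 0 means it did not.
X = [
    [8.0, 2, 0.90], [6.5, 3, 0.80], [7.2, 1, 0.95], [5.0, 4, 0.70],
    [1.0, 9, 0.20], [0.5, 8, 0.30], [2.0, 7, 0.25], [1.5, 10, 0.10],
]
y = [1, 1, 1, 1, 0, 0, 0, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# A new scenario: the tree returns a probability, not just a yes/no verdict.
p_enter = tree.predict_proba([[6.0, 2, 0.85]])[0][1]
```

A real advisory system would need far richer data and features, but the output shape is the point: a probability the executive can weigh, rather than a binary answer.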

The potential of Artificial Intelligence for organizations is enormous, and if the projections turn out to be true and the AI market grows into a multi-billion-dollar market in the coming years, doing business can take on a whole new meaning, requiring fewer employees while significantly improving bottom-line results. We are all aware of how jobs that were once the exclusive domain of humans, such as facial recognition, sarcastic comment analysis, automobile operation, and language translation, are now being done with software. If your organization is not already doing so, you should encourage it to undertake pilot projects involving AI to gain experience and better understand its capabilities and, perhaps more important, its limitations.

Cognitive Use Cases for Insurance

Advances in cognitive science, also known as Artificial Intelligence, are making sweeping changes in many industries and professions today. The sheer ability to source, analyze, and construct policies using individualized and personal information will facilitate underwriting and claims processing in personal and commercial lines. Cognitive science can power decisions that go beyond the current use of rule-based knowledge and the retroactive use of the results of predictive analytics.

As per HBR, the good news is that the early dividends from cognitive computing are already within reach of most midsize companies as they look for ways to expand their digital boundaries. In fact, the building blocks of cognitive computing can produce great results with fewer technical requirements and less time and money than many companies realize. What’s more, those that take this initial step are getting a leg up on cognitive computing’s future, since that step is going to be a prerequisite for everything that follows.

Allstate Business Insurance, a division of Allstate Insurance, used these tools to develop a virtual assistant known as ABIe (pronounced “Abby”) to answer questions from its 12,000 agents. Last year, Swiss Re established a dedicated Center of Competence for cognitive computing in collaboration with IBM; together they are developing a range of underwriting solutions that rely on IBM Watson’s cognitive computing technologies. A Peruvian insurer, RIMAC Seguros, plans to use Watson Content Analytics for health insurance claims processing. “When a claim is made, Watson will scan thousands of policy documents and almost instantaneously pull out the paragraphs relevant to the decision at hand,” the report said, noting that in early tests this cut the time to process a claim by more than 90 percent.

In a media statement, IBM reported that in just two years, the percentage of C-suite leaders across all 21 industries who expect to contend with competition from outside their own industry increased to more than half: 54 percent today compared to 43 percent in 2013, according to the study “Redefining Boundaries: Insights from the Global C-Suite Study.” Fifty-two percent of insurer CXOs and 57 percent of bank CXOs see cognitive computing on the near-term horizon. In other industries, such as professional and computing services, which like insurance are heavily staffed by knowledge workers, just 43 percent selected “cognitive,” and only 37 percent of CXOs see cognitive as a near-term force across all industries.

What is Watson?

IBM’s Watson is a cognitive-era system that learns and understands natural language much like human beings do. There is no need to program every output with complex programming logic. It is a new kind of system that is taught by design, learns by experience, learns from interactions, gets smarter over time, and makes better judgments over time.

Watson enables humans to chat with it in natural language through a feature called “Watson Personal Advisor.” The technology behind this capability was first put to the test on the American quiz show Jeopardy!

Source: Shaping the future of insurance with IBM Watson, December 2014

Some of the use cases of Cognitive computing in Insurance:

  1. Enable Advisors to deliver cost effective personalized advice
  2. Support agents and underwriters by pre-reading millions of pages of regulations, past historical cases, and relevant information
  3. Provide evidence based advice to claims adjuster using structured and unstructured datasets (images, contact center notes, claim notes, social media posts, etc)
  4. Help make policy pricing and underwriting decisions during sales and marketing
  5. Help counter fraud by analyzing structured and unstructured data in real time using speech to text, notes, policies and regulations, previous claim history, etc
  6. Consolidate, Search and Analyze enterprise data
  7. Advise members/customers on claims processing and guide them through personalized process such as getting rental car, or getting an appointment with restoration service, etc.
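
The “pull out the relevant paragraphs” use case from the RIMAC Seguros example can be approximated at toy scale with classic text retrieval; this sketch uses TF-IDF from scikit-learn rather than Watson’s actual (proprietary) pipeline, and the policy text is invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical policy paragraphs; a real system would index thousands of documents.
paragraphs = [
    "Water damage from burst pipes is covered up to the policy limit.",
    "Flood damage caused by rising external water is excluded from coverage.",
    "Theft of personal property is covered when forced entry is documented.",
]

claim = "pipe burst and flooded the kitchen, water damage everywhere"

# Score each paragraph against the claim by TF-IDF cosine similarity.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(paragraphs + [claim])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best_paragraph = paragraphs[scores.argmax()]
```

Watson’s cognitive approach goes well beyond this kind of keyword-weighted matching, but the sketch shows the basic mechanic: rank the document’s paragraphs by relevance to the claim and surface the top ones to the adjuster.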

Reinsurers have also used cognitive computing in areas such as client engagement, insurance agent support, underwriting assistance, and claims.

IBM’s Watson is at the forefront of a new era of cognitive computing, in which virtual advisors interact seamlessly with consumers and agents, learning by experience, making better judgments, and getting smarter over time. Watson is poised to revolutionize the way the insurance industry engages with its customers, both consumers and agents. It enables this disruption through apps and systems that interact with consumers in natural language, enhancing and scaling human expertise with continued use and new information. Watson can provide a personalized, consistent experience to customers, at scale, in today’s competitive environment.