So what is Machine Learning?

Machine Learning (ML) is everywhere today! If you use any of the apps or websites of Amazon, Google, Uber, Netflix, Waze, or Facebook in your busy daily life, they are all supported internally by machine learning algorithms. Most of the top companies in every sector, from Aviation to Oil and Gas, Banking to Retail, Ecommerce to Transportation, have products and projects using machine learning tools and technologies. Machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. It is so pervasive today that you probably use it dozens of times a day without knowing it.

Wikipedia defines machine learning as “a subfield of computer science (CS) and artificial intelligence (AI) that deals with the construction and study of systems that can learn from data, rather than follow only explicitly programmed instructions.” According to IBM, machine learning is the science of how computers make sense of data using algorithms and analytic models. According to SAS, machine learning is a method of data analysis that automates analytical model building: using algorithms that iteratively learn from data, it allows computers to find hidden insights without being explicitly programmed where to look.

As per Stanford, machine learning is the science of getting computers to act without being explicitly programmed. It enables cognitive systems to learn, reason, and engage with us in a more natural and personalized way. The most powerful form of machine learning in use today, called “deep learning,” builds a complex mathematical structure called a neural network from vast quantities of data. Designed to be loosely analogous to how a human brain works, neural networks were first described in the 1940s. But it is only in the last three or four years that computers have become powerful enough to use them effectively. If deep learning is going to be as big as the internet, it’s time for everyone to start looking closely at it.

Machine Learning (ML) is a form of AI that gives a computer the ability to learn and essentially teach itself to evolve as it is exposed to new and ever-changing data. The main components of ML software are statistical analysis and predictive analysis. ML algorithms can spot patterns and find hidden insights in data observed from previous computations without being programmed where to look, and they learn from every experience and interaction.

Some of the common ML techniques are Linear Regression, K-means, Decision Trees, Random Forests, PCA, SVMs, and finally Artificial Neural Networks (ANNs). Artificial Neural Networks are where the field of Deep Learning had its genesis. A minimal illustration of two of these techniques appears below.
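To make two of these techniques concrete, here is a minimal, hedged sketch using scikit-learn (one of the tools covered later in this series). The data is synthetic and purely illustrative; a real project would start from observed data.

```python
# Minimal, illustrative sketch of two classic ML techniques with scikit-learn.
# The data here is synthetic; in practice you would load real observations.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.RandomState(42)

# Linear regression: learn y = 3x + noise from observed samples.
X = rng.rand(100, 1) * 10            # 100 samples, 1 feature
y = 3 * X.ravel() + rng.randn(100)   # noisy linear target
reg = LinearRegression().fit(X, y)
print("learned slope:", reg.coef_[0])          # should be close to 3
print("prediction for x=5:", reg.predict([[5.0]])[0])

# K-means: discover 3 hidden groups in unlabeled 2-D points.
points = np.vstack([rng.randn(50, 2) + c for c in ([0, 0], [5, 5], [0, 5])])
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(points)
print("cluster centers:\n", km.cluster_centers_)
```

Note how neither model was given explicit rules: the regression recovers the slope and the clustering recovers the groups purely from the data.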

The Internet of Things (IoT) will produce billions and billions of data points from billions of connected devices by 2020. Machine learning can help companies take those billions of data points and boil them down to what’s really meaningful. Realizing IoT’s promise depends on machine learning to find the patterns, correlations, and anomalies that can enable improvements in almost every facet of our daily lives.
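As a toy illustration of that idea, the sketch below uses scikit-learn’s Isolation Forest to flag anomalous readings in simulated sensor data. The sensor values and thresholds are invented for illustration, not drawn from any real IoT deployment.

```python
# Hedged sketch: spotting anomalies in simulated IoT sensor readings
# with an Isolation Forest. All values here are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
normal = rng.normal(loc=22.0, scale=0.5, size=(1000, 1))   # normal temperatures
spikes = rng.normal(loc=60.0, scale=2.0, size=(5, 1))      # faulty-device spikes
readings = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = detector.predict(readings)          # -1 marks an anomaly
print("anomalous readings:", readings[flags == -1].ravel())
```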

As per HBR, machine learning has tremendous potential to transform companies. Executives who want to get the most out of their companies’ data should understand what it is, what it can do, and what to watch out for when using it. It’s time to let the machines point out where the opportunities truly are! In our next blog, we will take a journey through various use cases and implementations of ML in different industries.

Share with me your use cases of ML that you want to get included in my next blog!

Top 14 Machine Learning Tools for Data Scientists

IBM, Google, Microsoft, Amazon, and several other companies offer machine learning tools and APIs as part of their cloud offerings. Last year, Amazon announced it was open-sourcing its deep learning library, the Deep Scalable Sparse Tensor Network Engine (DSSTNE), which is now available on GitHub. Google followed by opening its SyntaxNet neural network framework so developers can build applications that process human language. In October 2016, IBM renamed its Predictive Analytics Service “Watson Machine Learning,” with a focus on providing deeper, more sophisticated self-learning capabilities as well as enhanced model management and deployment functionality within the service.

Machine learning has taken the main stage in many advanced analytical and predictive models. If you are a user of Amazon.com or Netflix, you are consuming machine learning recommendations. Similarly, most websites customize and personalize themselves to your taste, behavior, and style using machine learning that learns from previous patterns and enables a cognitive system to learn, reason, and engage with you in a natural and personalized way.
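To show the core idea behind such recommendations (and emphatically not Amazon’s or Netflix’s actual systems), here is a tiny user-similarity sketch over an invented rating matrix.

```python
# Illustrative sketch (not any company's production recommender): suggest
# items by cosine similarity between users' rating vectors.
import numpy as np

# Rows = users, columns = items; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 3, 2],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

target = 0  # recommend for the first user
sims = [cosine(ratings[target], ratings[u]) for u in range(len(ratings))]
best_neighbor = int(np.argsort(sims)[-2])       # most similar *other* user
unseen = np.where(ratings[target] == 0)[0]      # items the user hasn't rated
pick = unseen[np.argmax(ratings[best_neighbor, unseen])]
print(f"user {target}: try item {pick} (liked by similar user {best_neighbor})")
```

Production systems replace this toy matrix with millions of users and add many signals, but the underlying intuition of “people like you liked this” is the same.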

Machine learning has entered every industry, from Retail, Manufacturing, Banking, Cable, Games and Sports, and Media/Entertainment to many more. Today’s machine learning uses analytic models and algorithms that iteratively learn from data, allowing computers to find hidden insights without being explicitly programmed where to look. This means data analysts and scientists can teach computers to solve problems without having to recode rules each time a new data set is presented. Using algorithms that learn by looking at hundreds or thousands of data samples, computers can make predictions based on these learned experiences to solve the same problem in new situations. And they’re doing it with a level of accuracy that is beginning to mimic human intelligence.

Here are some of the top machine learning tools:

  1. IBM Machine Learning – https://console.ng.bluemix.net/catalog/services/ibm-watsonmachinelearning/
  2. Microsoft Azure Machine Learning – https://azure.microsoft.com/en-us/services/machinelearning/
  3. Google Machine Learning – https://cloud.google.com/products/machinelearning/
  4. Amazon Machine Learning – https://aws.amazon.com/machinelearning/
  5. Scikit-learn – https://github.com/scikit-learn/scikit-learn
  6. Shogun – https://github.com/shogun-toolbox/shogun
  7. Mahout – https://mahout.apache.org/
  8. Apache Spark MLlib – https://spark.apache.org/mllib/
  9. Weka – http://www.cs.waikato.ac.nz/ml/weka/
  10. Cloudera – http://www.cloudera.com/training/courses/intro-machine-learning.html
  11. BigML – https://bigml.com/
  12. TensorFlow – https://www.tensorflow.org/
  13. H2O – http://www.h2o.ai/
  14. Veles – https://velesnet.ml/

Here is the most recent Machine Intelligence Landscape, published in early 2016 by Shivon Zilis (shivonzilis.com):

Tell us about your ML experience with any of the tools above and any new tools that you want to share with other readers!

AI vs. Machine Learning vs. Deep Learning

Artificial Intelligence, Machine Learning, and Deep Learning were among the most-hyped terms of 2016, and they are carrying that momentum into 2017. These techniques have existed for decades, but their application to the business world has only recently gone mainstream. Machine learning and deep learning are both fundamentally forms of artificial intelligence. While the concepts behind them have been around since the 1950s, the two have evolved and diverged from each other decade by decade with new technology and experiments.

As per Stanford University, AI is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

So where exactly did AI start? Hmmm… After WWII, a number of people independently started to work on intelligent machines. The English mathematician Alan Turing may have been the first; he gave a lecture on the subject in 1947. He may also have been the first to decide that AI was best researched by programming computers rather than by building machines. By the late 1950s, there were many AI researchers, and most of them were basing their work on programming computers.

AI has several branches, among them Search, Pattern Recognition, Logical AI, Heuristics, Genetic Programming, Epistemology, and Ontology. A few of the common applications are Games, Speech Recognition, Vision, Expert Systems, Natural Language Processing, and Heuristic Classification.

Machine Learning (ML) is a form of AI that gives a computer the ability to learn and essentially teach itself to evolve as it is exposed to new and ever-changing data. The main components of ML software are statistical analysis and predictive analysis. ML algorithms can spot patterns and find hidden insights in data observed from previous computations without being programmed where to look, and they learn from every experience and interaction.

If you use Netflix, you will notice that the suggested movies change and personalize to your taste based on your previous viewing. While machine learning has become an integral part of processing data, one of its main differences from deep learning is that it requires manual intervention to select which features to process, whereas deep learning learns useful features automatically.

Some of the common ML techniques are Linear Regression, K-means, Decision Trees, Random Forests, PCA, SVMs, and finally Artificial Neural Networks (ANNs). Artificial Neural Networks are where the field of Deep Learning had its genesis.

Deep Learning (DL) is an advanced, sophisticated branch of AI with predictive capabilities, inspired by the brain’s ability to learn. Andrew Ng, co-founder of Coursera and Chief Scientist at Baidu Research, founded the Google Brain project, which eventually resulted in the productization of deep learning technologies across a large number of Google services. Just as the human brain can identify an object in milliseconds, deep learning can mirror this instinct with nearly the same speed and precision. Deep learning has the nimble ability to assess an object, properly digest the information, and adapt to different variants.
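To ground the idea, here is a minimal NumPy sketch of the basic machinery behind deep learning: a tiny two-layer neural network trained by gradient descent to learn XOR. This is a toy, not production deep learning; real deep networks stack many such layers and train on vast data.

```python
# Minimal NumPy sketch of the core neural-network idea: a tiny two-layer
# network learning XOR by gradient descent. Illustrative only.
import numpy as np

rng = np.random.RandomState(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.randn(2, 8), np.zeros(8)   # hidden layer weights
W2, b2 = rng.randn(8, 1), np.zeros(1)   # output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)            # forward pass: hidden activations
    out = sigmoid(h @ W2 + b2)          # forward pass: prediction
    grad_out = (out - y) / len(X)       # cross-entropy error signal
    grad_W2 = h.T @ grad_out            # backpropagate to output weights
    grad_h = grad_out @ W2.T * (1 - h ** 2)   # ...and through tanh
    grad_W1 = X.T @ grad_h
    W2 -= 1.0 * grad_W2; b2 -= 1.0 * grad_out.sum(0)
    W1 -= 1.0 * grad_W1; b1 -= 1.0 * grad_h.sum(0)

print(out.round(3).ravel())             # approaches [0, 1, 1, 0]
```

The network is never told the XOR rule; it discovers its own internal features from examples, which is exactly the property that lets deep networks scale to speech and image recognition.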

Deep Learning is used by Google in its voice and image recognition algorithms, by Netflix and Amazon to decide what you want to watch or buy next, and by researchers at MIT to predict the future. Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power.

Here is Gartner’s August 2016 Hype Cycle; notably, Deep Learning isn’t even mentioned on the slide:

Google had two deep-learning projects underway in 2012. Today it is pursuing more than 1,000, according to a spokesperson, across all its major product areas, including search, Android, Gmail, translation, maps, YouTube, and self-driving cars. IBM’s Watson system used AI, but not deep learning, when it beat two Jeopardy! champions in 2011. Now, though, almost all of Watson’s 30 component services have been augmented by deep learning. Most of the deep-learning applications that have been commercially deployed so far involve companies like Google, Microsoft, Facebook, Baidu, and Amazon—the companies with the vast stores of data needed for deep-learning computations.

One startup consulting company, 7F Consulting, promises to leverage machine learning and deep learning in its strategy advisory practice, advising the C-suite on strategy decisions. Its framework is called ‘Rananiti’ – meaning war strategy in Sanskrit! Rananiti uses a proprietary solution and framework that provides significant access to market insights and advises executives on critical business decisions with probabilistic recommendations.

Now, other companies have started experimenting and integrating deep learning into their own day-to-day processes. Are you the next one?

AI to the rescue of C-Suite

Artificial Intelligence (AI) has been discussed for the past several decades as science fiction and as experiments in research labs. In the past 2-3 years, however, AI has come to the forefront of discussion and has started gaining adoption in the C-suite and the business world. Many industries – Healthcare, Retail, Banking, Consumer Products, Insurance, Manufacturing, and Auto among several others – are now experimenting with AI tools and technologies.

A 2015 Tech Pro Research survey indicated that 24 percent of businesses across industries were already using AI or had plans to do so within the year. There are three forms of AI: Assisted, Augmented, and Autonomous. You can see this progression in the auto industry, which is taking cars from Assisted AI to Augmented AI, with the goal of fully Autonomous AI-driven cars in the next few months to a year.

Now, how do we apply AI to business strategy and the decision-making process? Despite several advancements in cognitive technologies in the past year, a fully autonomous AI for business strategy is far from reality. However, one leading startup management consulting firm, 7F Consulting, claims to have a solution framework that folds in market news, industry regulations, economic indicators, and social and competitive intelligence to act as an AI-assisted strategy advisor. Imagine a solution like this integrated with Alexa or Siri: it could run a hypothesis through a decision tree for you and, using the data, come back with a probabilistic recommendation for you to consider. Over the course of multiple interactions, the system learns what insights you need, what actions you take, and what drives results, and delivers tailored recommendations using a management framework.
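Since Rananiti is proprietary and its internals are not public, the following is only a generic, hypothetical sketch of the “decision tree with probabilistic output” idea using scikit-learn. All feature names and data are invented for illustration.

```python
# Generic, hypothetical sketch of "decision tree + probabilistic advice".
# This is NOT 7F Consulting's Rananiti; features and data are invented.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical historical decisions: [market_growth_pct, competitor_moves,
# regulation_risk 0-2] -> outcome (1 = expansion succeeded).
X = [[8, 1, 0], [2, 4, 2], [6, 2, 1], [1, 5, 2], [9, 0, 0], [3, 3, 1]]
y = [1, 0, 1, 0, 1, 0]

advisor = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
scenario = [[7, 2, 1]]                       # a new strategic scenario
p_success = advisor.predict_proba(scenario)[0][1]
print(f"estimated probability of success: {p_success:.0%}")
```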

The potential of artificial intelligence for organizations is enormous: if the projections turn out to be true and the AI market grows into a multi-billion-dollar market in the coming years, doing business can take on a whole new meaning, requiring fewer employees while significantly improving bottom-line results. We are all aware of how jobs that were once the exclusive domain of humans, such as facial recognition, sarcastic-comment analysis, automobile operation, and language translation, are now being done with software. If your organization is not already doing so, you should encourage it to undertake pilot projects involving AI to gain experience and better understand its capabilities and, perhaps more important, its limitations.

Cognitive Use Cases for Insurance

Advances in cognitive science, also known as Artificial Intelligence, are making sweeping changes in many industries and professions today. The sheer ability to source and analyze individualized, personal information and construct policies from it will facilitate underwriting and claims processing in personal and commercial lines. Cognitive science can power decisions that go beyond the current rule-based use of knowledge and the retroactive use of predictive analytics results.

As per HBR, the good news is that the early dividends from cognitive computing are already within reach of most midsize companies as they look for ways to expand their digital boundaries. In fact, the building blocks of cognitive computing can produce great results with fewer technical requirements and less time and money than many companies realize. What’s more, companies that take this initial step are getting a leg up on cognitive computing’s future, since that step is going to be a prerequisite for everything that follows.

Allstate Business Insurance, a division of Allstate Insurance, used cognitive computing tools to develop a virtual assistant known as ABIe (pronounced “Abby”) to answer questions from its 12,000 agents. Last year, Swiss Re established a dedicated Center of Competence for cognitive computing in collaboration with IBM; together they are developing a range of underwriting solutions that rely on IBM Watson’s cognitive computing technologies. A Peruvian insurer, RIMAC Seguros, plans to use Watson Content Analytics for health insurance claims processing. “When a claim is made, Watson will scan thousands of policy documents and almost instantaneously pull out the paragraphs relevant to the decision at hand,” the report said, noting that in early tests, this cut the time to process a claim by more than 90 percent.

In a media statement, IBM reported that in just two years, the percentage of C-suite leaders across all 21 industries who expect to contend with competition from outside their own industry increased to more than half—54 percent today compared to 43 percent in 2013—according to the study “Redefining Boundaries: Insights from the Global C-Suite Study.” Fifty-two percent of insurer CXOs and 57 percent of bank CXOs see cognitive computing on the near-term horizon. In other industries, such as professional and computing services, which like insurance are heavily staffed by knowledge workers, just 43 percent selected “cognitive,” and across all industries only 37 percent of CXOs see cognitive as a near-term force.

What is Watson?

IBM’s Watson is a cognitive-era system that learns and understands natural language much like human beings do. There is no need to program every output with complex programming logic. It is a new species of system: taught by design, it learns from experience and interactions, gets smarter over time, and makes better judgments over time.

Watson enables humans to chat with it in natural language through a feature called “Watson Personal Advisor.” This capability was first tested publicly on the American quiz show “Jeopardy!”

Source: Shaping the future of insurance with IBM Watson, December 2014

Some of the use cases of Cognitive computing in Insurance:

  1. Enable advisors to deliver cost-effective, personalized advice
  2. Support agents and underwriters by pre-reading millions of pages of regulations, historical cases, and relevant information
  3. Provide evidence-based advice to claims adjusters using structured and unstructured datasets (images, contact center notes, claim notes, social media posts, etc.)
  4. Help with policy pricing and underwriting decisions during sales and marketing
  5. Help counter fraud by analyzing structured and unstructured data in real time, using speech-to-text, notes, policies and regulations, previous claim history, etc.
  6. Consolidate, search, and analyze enterprise data
  7. Advise members/customers on claims processing and guide them through personalized steps such as getting a rental car or an appointment with a restoration service

Re-insurers, too, have used cognitive computing in areas such as client engagement, supporting insurance agents, underwriting assistance, and claims.

IBM’s Watson is at the forefront of a new era of cognitive computing, in which virtual advisors interact seamlessly with consumers and agents—learning from experience, making better judgments, and getting smarter over time. Watson is poised to revolutionize the way the insurance industry engages with its customers, both consumers and agents. It enables this disruption through cognitive computing, in which apps and systems interact seamlessly with consumers through natural language, enhancing and scaling human expertise and learning with continuing use and new information. Watson can provide a personalized, consistent experience to customers, at scale, in today’s competitive environment.

Blockchain: Basics and Hacks

A blockchain is a digital platform that hosts a digital ledger of transactions and shares it among a distributed network of computers. Cryptography allows each participant on the network to update the ledger securely without the need for a central authority. Once a block of data is recorded on the blockchain ledger, it is extremely difficult to change or remove. When someone wants to add to it, participants in the network — all of which have copies of the existing blockchain — run algorithms to evaluate and verify the proposed transaction. If a majority of nodes agree that the transaction looks valid — that is, the identifying information matches the blockchain’s history — then the new transaction is approved and a new block is added to the chain.
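A minimal Python sketch of why a recorded block is so hard to alter: each block stores the hash of the previous one, so editing any block breaks every later link. Mining, signatures, and network consensus are deliberately omitted here.

```python
# Toy hash chain illustrating blockchain immutability. Real blockchains add
# proof-of-work and a consensus network on top of this chaining idea.
import hashlib, json

def block_hash(prev_hash, transactions):
    payload = {"prev_hash": prev_hash, "transactions": transactions}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, transactions):
    return {"prev_hash": prev_hash, "transactions": transactions,
            "hash": block_hash(prev_hash, transactions)}

def valid_chain(chain):
    for prev, cur in zip(chain, chain[1:]):
        # Each block must point at its predecessor AND match its own contents.
        if cur["prev_hash"] != prev["hash"]:
            return False
        if cur["hash"] != block_hash(cur["prev_hash"], cur["transactions"]):
            return False
    return True

chain = [make_block("0" * 64, ["genesis"])]
chain.append(make_block(chain[-1]["hash"], ["alice->bob 5"]))
chain.append(make_block(chain[-1]["hash"], ["bob->carol 2"]))
print(valid_chain(chain))                           # True

chain[1]["transactions"] = ["alice->mallory 500"]   # tamper with history
print(valid_chain(chain))                           # False: later links break
```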

[Image: blockchain explainer graphic. Source: Financial Times]

A report from financial technology consultancy Aite estimated that banks spent $75 million last year on blockchain, and Silicon Valley venture capitalists are also queuing up to back it.


Bitcoin’s blockchain is often touted as a revolutionary step forward for network security, but August’s theft of nearly $68 million of customers’ bitcoins from a Hong Kong-based exchange demonstrated that the currency still carries big risks.

The very fact that all Bitcoin transactions are permanent and cannot be undone gives hackers a free hand to steal bitcoins and get away with it. There are clever safeguards built into the Bitcoin system so that altering a ledger entry in the blockchain invalidates all subsequent entries, which makes it practically impossible to undo payments (in this case, stolen bitcoins) unless the hacker himself agrees to return them. There are basically two ways a hacker can steal bitcoins: either obtain the wallet key of a user or group of users and use it to transfer all of their bitcoins to his own anonymous wallet, or hijack a Bitcoin mining pool and redirect all of its computing power to mine bitcoins for himself.

Kaspersky Labs and INTERPOL have presented research showing how blockchain-based cryptocurrencies can be abused to disseminate arbitrary data through their public, decentralized databases. An attack on “The DAO” took place on 17 June 2016. Believe it or not, the developers knew of the vulnerability before that date (12 June): one of the DAO’s creators, Stephan Tual, had published a blog post explaining that although a recursive-call bug existed in a similar smart contract framework (MakerDAO), the DAO was not at risk. While the developers were postponing the fix, the network was compromised and 3.6 million ETH (approximately $53 million at the time) were drained from the DAO, roughly a third of its resources. Security issues will likely always be present in the cryptocurrency world, and users will have to rely on cybersecurity firms to constantly innovate and provide solutions.

From IBM’s perspective, industrial-grade blockchain technologies have the following characteristics:

  • A shared, permissioned ledger is the append-only system of record (SOR) and single source of truth. It is visible to all participating members of the business network.
  • A consensus protocol, agreed to by all participating members of the business network, ensures that the ledger is updated only with network-verified transactions.
  • Cryptography ensures tamper-proof security, authentication, and integrity of transactions.
  • Smart contracts encapsulate the participants’ terms of agreement for the business that takes place on the network; they are stored on the validating nodes in the blockchain and triggered by transactions (a toy sketch follows this list).
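As a toy illustration of the smart-contract idea (not Hyperledger’s actual API or programming model), the sketch below models a contract as agreed-upon code that every validating node runs when transactions trigger it. The escrow scenario and field names are invented.

```python
# Toy illustration of a smart contract: code that validating nodes execute
# deterministically when a transaction triggers it. Invented scenario.
def escrow_contract(state, tx):
    """Release payment only once both parties have signed off."""
    if tx["type"] == "approve":
        state["approvals"].add(tx["party"])
    if state["approvals"] >= {"buyer", "seller"} and not state["released"]:
        state["released"] = True
        print(f"releasing {state['amount']} to {state['payee']}")
    return state

state = {"approvals": set(), "released": False, "amount": 100, "payee": "seller"}
for tx in [{"type": "approve", "party": "buyer"},
           {"type": "approve", "party": "seller"}]:
    state = escrow_contract(state, tx)   # every node applies the same logic
```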

IBM is a premier code-contributing member of the Hyperledger Project, the Linux Foundation’s open source collaborative effort to create a blockchain for business-to-business (B2B) and business-to-customer (B2C) transactions. IBM has contributed 44,000 lines of blockchain code to the Hyperledger Project. The contributed code helps developers explore the use of blockchain in the enterprise as they build secure, decentralized ledgers to exchange assets of value between participants. IBM describes its proposed contribution as a “low-level blockchain fabric that has been designed to meet the requirements of a variety of industry-focused use cases. It extends the learning of the pioneers in this field by addressing additional requirements needed to satisfy those broader industry use cases.”

Cognitive 101

“This era will redefine the relationship between man and machine.” – Ginni Rometty, IBM CEO

Cognitive computing is one of the most exciting developments in software technology of the past few years. Conceptually, cognitive computing focuses on enabling software models that simulate the human thought process, and it offers fundamental differences in how systems are built and interact with humans. Cognitive-based systems, such as IBM Watson, are able to build knowledge and learn, understand natural language, and reason and interact more naturally with human beings than traditional systems. They are also able to put content into context with confidence-weighted responses and supporting evidence.

More specifically, cognitive computing enables capabilities that simulate functions of the human brain such as voice, speech, and vision analysis. From this perspective, cognitive computing is becoming an essential element of the next wave of data intelligence for mobile and IoT solutions, where text, vision, and speech are common sources of data. Cognitive systems can quickly identify new patterns and insights, and over time they will simulate even more closely how the brain actually works. In doing so, they could help us solve the world’s most complex problems by penetrating the complexity of big data and exploiting the power of natural language processing and machine learning.

IBM points to a survey of more than 5,000 C-suite executives by its Institute for Business Value (IBV), which found the following:

  • Sixty-five percent of insurance industry CXOs are pursuing some form of business model innovation, but nearly 30 percent feel the quality, accuracy and completeness of their data is insufficient.
  • Sixty percent of retail executives do not believe their company is equipped to deliver the level of individual experiences consumers demand, and 95 percent say they will invest in cognitive in the next five years.
  • The healthcare industry forecasts a 13 million person gap in qualified healthcare workers by 2035, and more than half of healthcare industry CXOs report that current constraints on their ability to use all available information limit their confidence about making strategic business decisions. Eighty-four percent of these leaders believe cognitive will be a disruptive force in healthcare, and 95 percent plan to invest in it over the next five years.

The story was similar across all industries: executives surveyed by the IBV cited scarcity of skills and technical expertise — rather than security, privacy or maturity of the technology — as the primary barriers to cognitive adoption.

The most popular cognitive computing platform in the market, IBM Watson provides a diverse set of APIs enabling capabilities such as vision, speech, text, and data analysis. Watson is now available to developers as part of the Watson Developer Cloud included in Bluemix distributions.
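The shape of a call to such a cloud service is roughly as follows. This is a hedged sketch only: the endpoint URL, payload shape, and credentials are placeholders, not IBM’s actual API contract; consult the Watson Developer Cloud documentation for the real service details.

```python
# Hedged sketch of calling a cloud cognitive/ML REST API. The URL, payload,
# and credentials below are hypothetical placeholders, not real endpoints.
import requests

API_URL = "https://example-watson-endpoint/api/v1/analyze"   # hypothetical
payload = {"text": "Customers love the new claims process."}

resp = requests.post(API_URL, json=payload,
                     auth=("<username>", "<password>"),      # placeholder creds
                     timeout=10)
resp.raise_for_status()
print(resp.json())   # e.g., sentiment or entities, per the service's schema
```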


To learn more about Cognitive Computing, visit IBM:

http://www.research.ibm.com/cognitive-computing/index.shtml

http://www.ibm.com/cognitive/

Cognitive Internet of Things?

The Internet of Things (IoT) represents the extension and evolution of the Internet, with great potential and prospects for modern intelligent services and applications. However, the current IoT is still based on traditional static architectures and models. It lacks sufficient intelligence and cannot keep up with increasing application performance requirements. By integrating cognition into IoT, we arrive at a new concept, the Cognitive Internet of Things (CIoT), and its corresponding intelligent architecture.

Most current offerings from point-solution vendors for the Internet of Things (IoT) focus on how to connect devices to see, hear, and smell the physical world around them and report those observations. However, I would argue that connectivity and reporting alone are not enough; the key is the capability to learn, think, and understand physical, social, and contextual data and apply intelligence to it. This requirement drives us to develop a new model: the “Cognitive” Internet of Things. What is cognitive? It is more appropriate to refer to “cognition” as an “integrative field” rather than a “discipline,” since the study of cognition integrates many fields rooted in neuroscience, cognitive science, computer science, mathematics, physics, engineering, and more.

Cognitive computing is one of the most exciting developments in software technology in the past few years. Conceptually, cognitive computing focuses on enabling software models that simulate the human thought process. More specifically, cognitive computing enables capabilities that simulate functions of the human brain such as voice, speech, and vision analysis. From this perspective, cognitive computing is becoming an essential element to enable the next wave of data intelligence for mobile and IoT solutions. Text, vision, and speech are common sources of data used by mobile and IoT solutions.

As per IEEE, the Cognitive Internet of Things is a new network paradigm in which (physical or virtual) things or objects are interconnected and behave as agents with minimal human intervention. The things interact with each other following a context-aware perception-action cycle, use the methodology of understanding-by-building to learn from both the physical environment and social networks, store the learned semantics and/or knowledge in various kinds of databases, and adapt themselves to changes or uncertainties via resource-efficient decision-making mechanisms, with two primary objectives in mind (a toy sketch of the perception-action cycle follows the list):

  • bridging the physical world (objects, resources, etc.) and the social world (human demand, social behavior, etc.), together with themselves, to form an intelligent physical-cyber-social (iPCS) system;
  • enabling smart resource allocation, automatic network operation, and intelligent service provisioning.
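As a toy illustration of the context-aware perception-action cycle described above, here is a hedged Python sketch of a single “cognitive” device that senses, learns a running model of normal conditions, and acts on deviations. The device, readings, and thresholds are invented; a real CIoT stack would add networking, shared knowledge bases, and resource-aware decision-making.

```python
# Hedged sketch of a perception-action cycle for one CIoT agent.
# The sensor, values, and thresholds are invented for illustration.
import random

class CognitiveThermostat:
    def __init__(self):
        self.estimate = None          # learned "normal" temperature

    def perceive(self):
        return random.gauss(22.0, 0.5)          # simulated sensor reading

    def learn(self, reading):
        # Exponential moving average: adapt the model as conditions change.
        if self.estimate is None:
            self.estimate = reading
        else:
            self.estimate = 0.9 * self.estimate + 0.1 * reading

    def act(self, reading):
        # Decide based on the learned context, not a hard-coded setpoint.
        if self.estimate is not None and abs(reading - self.estimate) > 2.0:
            print(f"anomaly: {reading:.1f}C vs learned {self.estimate:.1f}C")

agent = CognitiveThermostat()
for _ in range(100):                  # the perception-action cycle
    r = agent.perceive()
    agent.act(r)                      # act on the current model
    agent.learn(r)                    # then update the model
```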

The development of IoT depends on dynamic technical innovation in a number of fields, from wireless sensors to nanotechnology. Without comprehensive cognitive capability, IoT is just an awkward stegosaurus: all muscle, no brains. To fulfill its potential and deal with growing challenges, we must take cognitive capability into consideration and empower IoT with high-level intelligence.