Cognitive Computing: From hype and pilots to mainstream implementation

Cognitive computing began in the late 1960s with early innovation around supercomputers, but decision-making capability, simulation and self-learning intelligence remained limited. With major advances in computing power, cognitive and behavioural science, and computational intelligence, researchers in cognitive computing made significant progress. Advances in medical science and neuropsychology allowed computer scientists to study the mechanics of the human brain and to build computational models patterned after the human mind. These models allow cognitive systems to associate attributes and parameters drawn from past experience. Cognitive computing and decision science were born as scientists developed computers that operated with greater speed and accuracy than the human brain.

As per IBM, cognitive computing systems learn and interact naturally with people to extend what either humans or machines could do on their own. They help human experts make better decisions by penetrating the complexity of Big Data. Big Data growth is accelerating as more of the world's activity is expressed digitally; the data is increasing not only in volume but also in speed, variety and uncertainty. Most data now comes in unstructured forms such as video, images, symbols and natural language, so a new computing model is needed for businesses to process it, make sense of it, and enhance and extend the expertise of humans. Rather than being programmed to anticipate every possible answer or action needed to perform a function or set of tasks, cognitive computing systems are trained using artificial intelligence (AI) and machine learning algorithms to sense, predict, infer and, in some ways, think.

Cognitive computing reassesses the nature of relationships among multiple variables and environmental factors. It can help assess the relationship between people and today's increasingly pervasive digital environment. Cognitive systems may play the role of assistant or coach for the user, or they may act virtually autonomously in many problem-solving situations. The boundaries of the processes and domains these systems will affect are still elastic and emergent. They must learn as attributes and variables change, and as questions and decisions evolve. They must resolve ambiguity and tolerate unpredictability. They must be engineered to feed on dynamic data in real time, or near real time. They must interact easily with users so that those users can define their needs comfortably. They may also interact with other processors, devices and cloud services, as well as with people. They must aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They must "remember" previous interactions in a process and return information that is suitable for the specific application at that point in time. They must understand, identify and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulations, the user's profile, process, task and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory or sensor-provided). The output from cognitive systems may be prescriptive, suggestive, instructive or simply entertaining.
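
Those requirements read like an architecture: a loop that takes a question, enriches it with remembered context, consults several structured and unstructured sources, and returns a suggestion (or asks a clarifying question when the problem statement is incomplete). The following is a minimal sketch of that loop; all class, function and field names are hypothetical and do not correspond to any particular cognitive platform.

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    question: str
    context: dict
    answer: str

@dataclass
class CognitiveAssistant:
    """Toy sketch of the loop described above: remember previous
    interactions, enrich new questions with context, consult several
    sources, and suggest rather than dictate."""
    sources: list                      # callables: (question, context) -> list of evidence dicts
    memory: list = field(default_factory=list)

    def ask(self, question: str, context: dict) -> str:
        # Enrich the new question with whatever we remember about the user.
        context = {**self._profile_from_memory(), **context}

        # Gather candidate evidence from every structured/unstructured source.
        evidence = [e for source in self.sources for e in source(question, context)]

        if not evidence:
            # Incomplete or ambiguous problem statement: ask back instead of guessing.
            answer = "Can you tell me more about what you need?"
        else:
            # Prefer the suggestion that matches the most contextual elements.
            answer = max(evidence, key=lambda e: e["context_matches"])["suggestion"]

        self.memory.append(Interaction(question, context, answer))
        return answer

    def _profile_from_memory(self) -> dict:
        # Very crude "remembering": reuse contextual elements seen before.
        profile = {}
        for past in self.memory:
            profile.update(past.context)
        return profile

# One hypothetical source that always proposes the same suggestion:
assistant = CognitiveAssistant(sources=[
    lambda q, ctx: [{"suggestion": "Try the in-store pickup option.",
                     "context_matches": len(ctx)}]
])
print(assistant.ask("Where is my order?", {"location": "Stockholm"}))
```

Real systems would of course replace the single lambda with connectors to documents, sensors and services, but the shape of the loop is the point.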

Cognitive informatics is a cutting-edge, multi-disciplinary research field that tackles the fundamental problems shared by modern informatics, computation, software engineering, AI, computational intelligence, cybernetics, cognitive science, neuropsychology, medical science, systems science, philosophy, linguistics, economics, management science, and the life sciences. The development and cross-fertilization of these science and engineering disciplines have led to a whole range of emerging research areas known collectively as cognitive informatics.

Sample Cognitive Computing Use Cases

  1. Shopping: Cognitive computing systems' ability to evaluate and generate hypotheses will help retailers find patterns, correlations and insights in mountains of structured and unstructured data. Watson's app development platform is already moving into this physical-virtual space: the startup Fluid has layered Watson on top of its Expert Personal Shopper app for retail brands, so Watson can act as your personal shopping assistant. Store associates will have similar intelligent tools giving them instant product information, customer loyalty data, sales histories, user reviews, blogs and magazines, so that when you do need to talk with another human, they know exactly how to help.
  2. Medical: Evidence-based learning, hypothesis generation and natural-language skills could help medical professionals make key decisions in patient diagnosis and treatment. The objective is to give doctors and surgeons a quick way to access diagnostic and treatment options based on up-to-date research.
  3. Banking: In fraud detection, financial institutions could use cognitive tools that go beyond analysing cardholders' credit transaction histories; cognitive computing might provide new "associational" intelligence, such as when an individual is most likely to make purchases, what he or she is likely to buy, and under what circumstances (a minimal sketch follows this list).
  4. Finance: Financial advisors, including individuals who manage their own portfolios, will also benefit from cognitive computing, as the technology brings together relevant, current and personalized information and remembers questions and preferences.
  5. Weather Forecasting and Planning: Weather forecasting can benefit from cognitive computing and big data analytics. For instance, IBM's Deep Thunder, a research project that creates precise, local weather forecasts, can predict severe storms in a specific area up to three days before the event. This early-warning system gives local authorities and residents enough time to make preparations.
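
The banking item lends itself to a concrete illustration. One way to picture the "associational" intelligence described above is as a per-cardholder profile of what is bought, when and where, against which each new transaction is scored before anyone thinks of declining it. The following is a minimal sketch; the field names, the scoring rule and the sample data are invented for illustration and are not taken from any real fraud system.

```python
from collections import Counter

def build_profile(history):
    """Summarise a cardholder's habits: what they buy, at which hour of
    day, and in which country the card is normally used."""
    return {
        "category": Counter(t["category"] for t in history),
        "hour": Counter(t["hour"] for t in history),
        "country": Counter(t["country"] for t in history),
    }

def anomaly_score(txn, profile, n_history):
    """0 means 'exactly like past behaviour', 3 means 'unlike anything
    this cardholder has done before'."""
    score = 0.0
    for key in ("category", "hour", "country"):
        seen = profile[key][txn[key]]          # 0 if never seen before
        score += 1.0 - seen / n_history
    return score

# Illustrative history: a cardholder who mostly buys groceries and fuel at home.
history = (
    [{"category": "grocery", "hour": 18, "country": "US"}] * 40
    + [{"category": "fuel", "hour": 8, "country": "US"}] * 20
)
profile = build_profile(history)

routine = {"category": "grocery", "hour": 18, "country": "US"}
odd = {"category": "jewellery", "hour": 3, "country": "RO"}
print(anomaly_score(routine, profile, len(history)))  # low -> leave it alone
print(anomaly_score(odd, profile, len(history)))      # high -> worth a second look
```

A production system would learn far richer associations than three frequency counts, but even this toy version captures the idea of judging a transaction against the individual rather than against the whole population.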

As per Deloitte University Press, cognitive analytics can help address some key challenges. It can improve prediction accuracy, provide augmentation and scale to human cognition, and allow tasks to be performed more efficiently (and automatically) via context-based suggestions. For organizations that want to improve their ability to sense and respond, cognitive analytics offers a powerful way to bridge the gap between the promise of big data and the reality of practical decision making.

IBM has calculated that the market for cognitive computing services in the private sector is in the neighborhood of $50 billion, yet at present there are very few vendors in the field. IBM has announced the creation of the Watson Group to commercialize cognitive computing and Google has acquired the AI startup DeepMind, but there are still few companies in the space; much of the work is happening at universities or within research organizations. Cognitive computing is still early from a commercialization perspective, and industry-wide adoption, with its impact on a wide range of companies, is likely another three to five years away. For a while at least, cognitive computing will fill a niche in industries where highly complex decision making is the norm, such as healthcare and financial markets. Eventually, it will become a normal tool in every corporate toolbox, helping to augment human decision making.

Top 10 Big Data and Analytics Predictions for 2015

Echoing Forrester's forecast that Hadoop will become an "enterprise priority" in the next 12 months, International Data Corp. has just gazed into its own crystal ball and sees a future where spending on Big Data analytics grows three times faster in 2015.

Here are our top 10 Big Data and analytics trend predictions for 2015:

  1. Rise of the Chief Data Officer and Data Scientist positions
  2. Adoption of cloud computing and cloud data warehouses will grow
  3. Data visualization tools with advanced capabilities will become very popular with business users
  4. Data integration hubs at most large and mid-size enterprises will expand to include unstructured data and will incorporate predictive analytics using machine learning and advanced capabilities such as Watson
  5. Integration between enterprise mobile apps and Big Data technology will be on the rise
  6. Organizations will move from data lakes to data processing platforms; self-service Big Data with predictive analytics and advanced visualizations will go mainstream
  7. Companies and organizations will struggle to find data engineering and data science talent, and many legacy ETL/integration and EDW resources will fill the gap
  8. The Internet of Things (IoT) will be the next critical focus for data and analytics services
  9. Data providers will evolve rapidly into new business models
  10. Data privacy and security will gain momentum in implementation

While we can't know for sure whether each of these predictions will come true, we do know that the world of big data is changing. It is no longer just about having access to data and the ability to store it, but about achieving actionable results through predictive and prescriptive analytics. Only time will tell how this evolves, but if you aren't leveraging data to compete and win, it's time to get on board. The big data and analytics market will reach $125 billion worldwide in 2015, according to IDC, and if you are not investing or planning to invest, you are already behind your competitors or losing your place as an A player.

Making the most of what you have

Many, many companies have built very sophisticated Data Warehouses. They should start using what they've got a little more effectively before moving on to tougher things!

So there I was in an ICA store in Stockholm, a huge trolley of goods for the weekend, dead pleased that I had eventually got to the front of the queue. It was Saturday and everyone was in a hurry to get home after queuing for ages on the Stockholm motorways. My partner was diligently packing the goods because it was my turn to pay, so imagine my horror when my debit card was rejected – not once, but three times. Crikey, everyone was looking at me as if I was some sort of crook. Luckily my partner's AMEX card came to the rescue, but imagine my concern. I kept thinking of the £20k balance in my account and wondering what had happened to it.

In a panic on the way home I missed an incoming SMS, but got a second one when I got back and was horrified to see my bank's number come up – well, I assumed it was my bank, as in fact it was actually some random call centre somewhere on planet Earth. I answered it (at my own cost, as I was roaming) to be told that this was a routine security check because the behaviour on my card had proved concerning (to whom and why is a mystery, as you will see). I was asked to confirm the last few transactions on my card to verify that they were correct and not fraudulent. They were:

Currency exchange (at Heathrow)

A purchase at Heathrow of around £30 (two bottles of champers)

Purchase of an airline ticket – UK to Sweden.

Well, I confirmed all of this and was simply informed that my card would now start working again – no explanation, nothing. Unbelievable. My card had been refused at a grocery store, but imagine what could have happened!

Now you might ask why this guy is moaning about this. Well, I'm moaning because for the two years prior to this incident I had been travelling to Sweden at least once every six weeks – I invariably change money, always buy champagne and always buy an air ticket, so why did my bank see this as unusual? Why weren't they using some system to check that this was in fact a perfectly normal pattern of activity – nothing unusual here? And why does this bank have the authority to arbitrarily stop me using my own money, no less in such a preposterous manner?

Well, the bank I am talking about was a pioneer in Data Warehousing, so I'm left wondering why this event happened when I know that they diligently record all my transactions and store them in a DW, whilst apparently failing to understand what those transactions mean. No need for Hadoop here!
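
The frustrating part is that the check the bank needed is not exotic. The warehouse already held two years of my card history, and a single look-up against it would have shown that a grocery purchase in Sweden was entirely routine for this account. Here is a minimal sketch of that look-up, using an in-memory SQLite database as a stand-in for the warehouse; the table, columns and account ID are invented for illustration.

```python
import sqlite3

# Stand-in for the bank's warehouse: two years of card history for one account.
dw = sqlite3.connect(":memory:")
dw.execute("""CREATE TABLE transactions
              (account TEXT, merchant_type TEXT, country TEXT, txn_date TEXT)""")
history = [("cust42", m, c, f"2013-{month:02d}-05")
           for month in range(1, 13)
           for m, c in [("currency_exchange", "GB"),
                        ("airline_ticket", "GB"),
                        ("grocery", "SE")]]
dw.executemany("INSERT INTO transactions VALUES (?, ?, ?, ?)", history)

def looks_routine(account, merchant_type, country, min_occurrences=3):
    """Before declining a card, ask how often this customer has made the
    same kind of purchase in the same country before."""
    (count,) = dw.execute(
        """SELECT COUNT(*) FROM transactions
           WHERE account = ? AND merchant_type = ? AND country = ?""",
        (account, merchant_type, country)).fetchone()
    return count >= min_occurrences

# The Saturday grocery purchase in Sweden that was declined three times:
print(looks_routine("cust42", "grocery", "SE"))   # True -> nothing unusual here
```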

Regulation – a class of Big Data apps

There are bad guys out there!

Going back to the gist of my last post, one of the pillars that underpinned de-regulation was the idea that companies would work in a 'correct' manner and regulate themselves. The truth is that this worked, and still works, very well for 95% of companies, but there are always bad pennies committing fraud or simply not being careful in their accounting practices. Thanks to a few well-known financial disasters, even before the global meltdown, the concept of re-regulation loomed large across many industries. There are many sets of rules now in place to bring governance to company business – some of the better known include Sarbanes-Oxley and Basel II and III, which have been around for a little while now. We might ask ourselves what they have in common, and the answer is that these, and many more such initiatives, demand that very accurate and accountable numbers are produced quickly from very complex underlying data. The need for Business Intelligence rears its head once again, and the term 'Big Data' can certainly be applied to some of these initiatives.

Re-regulation demands that some very complex numbers are delivered:

  • Quickly
  • Accurately
  • Transparently

Throw into the pot that the data needed comes, as often as not, from tens or even hundreds of operational systems distributed across the world, and that some of these initiatives need very complex predictive modelling and detailed segmentation, and we see a new class of Big Data applications.
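
In data terms, "quickly, accurately, transparently" means consolidating figures from many operational systems while keeping a record of which system contributed what, so the headline number can be audited afterwards. A minimal sketch of that idea follows; the source-system names and figures are invented for illustration.

```python
def consolidate(figures_by_system):
    """Aggregate one regulatory figure from many operational systems,
    keeping per-source lineage so the total can be audited later."""
    lineage = []
    total = 0.0
    for system, figures in figures_by_system.items():
        subtotal = sum(figures)
        lineage.append({"source": system,
                        "records": len(figures),
                        "subtotal": subtotal})
        total += subtotal
    return total, lineage

# Invented example: an exposure figure reported by three regional systems.
total, lineage = consolidate({
    "core_banking_eu": [1.2e6, 0.8e6],
    "core_banking_us": [2.5e6],
    "trading_book_apac": [0.4e6, 0.3e6, 0.1e6],
})
print(total)     # the headline number that must be delivered quickly
print(lineage)   # the transparency: which system contributed what
```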

Natural Selection in Business – Does using Big Data provide a sustainable advantage?

In nature, when resources are plentiful, species live together quite amicably. Even predator and prey reach a satisfactory balance whereby there is always food for both. However, when resources are scarce, species that were once happy together often turn into bitter enemies. The strong, big guys fight each other, determined to completely obliterate their competitors, often resulting in mortal damage being inflicted on both. Whilst this is happening, the intelligent guys, who are inevitably smaller and physically weaker, get to work. Firstly, they take advantage of the others' preoccupation by amassing their basic requirements quickly. They then diversify and find a niche for themselves, knowing that competition will come, but determined to foresee it and avoid it where possible.

Most people accept that this is the way of the natural world and business dynamics tend to follow the same basic rules. Intelligent companies will not measure themselves by numbers of employees, amount of real estate or revenue alone, but will instead increasingly judge themselves on different values:

  • The average lifetime value of their key customers
  • The elapsed time for a new customer to become profitable
  • Public image
  • Customer retention
  • Knowledge, expertise and willingness of the work force
  • Brand awareness and flexibility
  • Environmental friendliness
  • Efficient and focused work practices
  • Customer satisfaction

Note: be aware that the little guys don't always have to take on the big guys directly, and in fact it's usually best not to. Those of you who know the story of David and Goliath should be clear that this was not a simple big-guy-versus-little-guy contest in which David shows the world not to be afraid of a 'larger' opponent. The fact is that Goliath, although big, had no noticeable weaponry, whilst David had the equivalent, in those days, of a sawn-off shotgun. My guess is that if the two had met with equal weapons the result would have been rather less romantic, but David showed some real common sense here. He knew that if he wasn't prepared for the fight he had no chance, so he fought the battle very much on his own terms.