
What’s the Big Deal About Big Data?

Published on: August 15, 2013

Paul Zikopoulos is Director of Technical Professionals for IBM Software Group’s Information Management Division and leads the World Wide Competitive Database and Big Data Technical Sales Acceleration teams. An award-winning writer and speaker with more than 19 years’ experience in Information Management, he has written 350 magazine articles and 16 books, and received nearly as many accolades.

At the TechKnowFile 2013 keynote, Paul Zikopoulos led the audience on a roller-coaster ride through Big Data, accompanied by the scenery of a complex PowerPoint show and narrated with colorful analogies. “The Airbus 380,” he began, showing the plane on screen, “about to take off from Heathrow. By the time it lands at JFK, it will have generated over 640 Terabytes of data. And what happens to it? Nothing. Nothing, unless there is a disaster. It just drops to the floor.” This data on the floor is a missed opportunity, Zikopoulos argued, pointing out that “we are guilty of not knowing what we already know,” because we collect so much information and then have no way of analyzing it.

Zikopoulos defined the primary characteristic of big data as its velocity, arguing that the key is not how fast data is produced, altered, or delivered to the institution, but how fast the institution can act on it. He argued for a shift in big data analytics from at-rest to in-motion, a technique he called “nowcasting.” That integration is difficult, he acknowledged, but well addressed by Hadoop, which flattens the technical challenge by dividing both the data and the analytics across servers and processors.

The highest-performing institutions have a high “Analytics IQ,” an integration of an information foundation with analytic applications.

As Marden Paul would emphasize again in his TKF13 session, Zikopoulos argued that IT needs to deviate from tradition, prioritizing discovery and exploration to boost the analytics IQ. Traditional analytics are structured and repeatable, he argued: users determine the questions, and the IT team builds a system to answer them. He called instead for an “iterative and exploratory” approach, in which IT delivers data on a flexible platform that users can explore and ask anything of.

Zikopoulos advocated the use of Hadoop and MapReduce (the programming paradigm) to separate the signal from the noise, and pointed to several goals of big data imperatives: better exploration of data; an enhanced view of the customer (or user); improved intelligence and the ability to monitor security in real time; operations analysis, via machine-data analysis, for improved business results; and data warehouse augmentation for increased operational efficiency.
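To make the paradigm concrete, here is a minimal sketch of the MapReduce pattern in plain Python rather than Hadoop itself (the records and function names are illustrative, not from the talk): the map step processes each record independently, which is what lets the work spread across servers, and the reduce step aggregates the results by key.

```python
from collections import defaultdict

# Toy records: in Hadoop these would be distributed across many nodes.
records = [
    "big data velocity",
    "data in motion",
    "big data at rest",
]

def map_phase(record):
    """Map: emit a (word, 1) pair for each word in a record.
    Records are processed independently, which is what lets Hadoop
    spread the work across servers and processors."""
    for word in record.split():
        yield (word, 1)

def reduce_phase(pairs):
    """Reduce: aggregate the counts for each key (word)."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

# Shuffle all mapped pairs together, then reduce.
mapped = (pair for record in records for pair in map_phase(record))
print(reduce_phase(mapped))
# {'big': 2, 'data': 3, 'velocity': 1, 'in': 1, 'motion': 1, 'at': 1, 'rest': 1}
```

Hadoop’s contribution is running these two steps at scale, moving the computation to wherever the data already lives rather than pulling the data to one machine.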

Although his focus was largely on business models, Zikopoulos provided examples of ways in which Big Data analytics could be used in the academic world. For instance, he pulled 4,000 tweets related to the University of Toronto from Twitter and created classifications based on residences. He argued that analytics like this could be used to see whether students feel disenfranchised from the University by the culture of their residence. Institutions could likewise analyze text messages and social media posts to identify cases of extreme stress, or the points at which students drop out of courses or leave school entirely.
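As an illustration of the kind of classification Zikopoulos described, a first pass might simply bucket tweets by which residence they mention. The residence names, sample tweets, and naive keyword rule below are hypothetical, not taken from his demo:

```python
# Hypothetical residence names and sample tweets, for illustration only.
RESIDENCES = ["Residence A", "Residence B", "Residence C"]

tweets = [
    "Loving the community at Residence A this year!",
    "Residence B wifi is down again, so stressed",
    "Midterms... no sleep in Residence B",
]

def classify_by_residence(tweet):
    """Return the residences a tweet mentions (naive substring match);
    a real pipeline would use tokenization and entity recognition."""
    return [r for r in RESIDENCES if r.lower() in tweet.lower()]

# Group the tweets into buckets keyed by residence.
buckets = {r: [] for r in RESIDENCES}
for tweet in tweets:
    for residence in classify_by_residence(tweet):
        buckets[residence].append(tweet)

for residence, matched in buckets.items():
    print(f"{residence}: {len(matched)} tweet(s)")
```

From buckets like these, sentiment or stress-signal analysis could then be run per residence, which is the step that would surface the disenfranchisement patterns Zikopoulos described.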

In a dense, wide-ranging presentation full of examples and illustrations, Zikopoulos presented Big Data as a fascinating opportunity for institutions to better understand and serve their users.

By: Elizabeth O’Gorek, Freelance Writer for ITS