
Interface 2013 Conference: Symposium on Big Data and Analytics

Chapman University Symposium on Big Data and Analytics: The 44th Symposium on the Interface of Computing Science and Statistics. Kirk Borne is organizing a session, "Astroinformatics: Learning from Data in the Astronomical Sciences," and welcomes prospective speakers (a response is needed by January). The joint Chapman-Interface symposium welcomes scientific contributions dealing with Big Data that advance the core scientific and technological means of managing, analyzing, visualizing, and extracting useful information from large and diverse data sets.
When: 04 April 2013, 06:30 AM to 06 April 2013, 11:30 PM
Where: Orange, CA, USA

The Interface Symposia series, and statisticians in general, have long recognized the need for large-scale data collection and analysis. Recent Interface symposia have focused on data-related themes such as: 1. The Future of Statistical Computing: Internet-Scale Data, Flexible Modeling, and Visualization; 2. Statistical, Machine Learning, and Visualization Algorithms; 3. Massive Data Sets and Streams; 4. Security and Infrastructure Protection; and 5. Frontiers of Data Mining and Bioinformatics. The 44th Symposium will continue this tradition with Big Data subthemes on Earth Systems Science and Healthcare Systems challenges, and on how these subthemes will draw on and contribute to the Big Data initiative. Topics of interest include computational statistics, statistical software, exploratory data analysis, data mining, pattern recognition, scientific visualization, and related fields, with applications to Earth Systems Science and Healthcare Systems.

It should be noted that the scale of what is considered Big Data has been increasing steadily. Kilobytes (10³), megabytes (10⁶), gigabytes (10⁹), and terabytes (10¹²) are by now familiar to any researcher using modern computing resources. NASA's Earth Observing System introduced serious consideration of petabytes (10¹⁵). Data collection systems looming on the horizon, such as the Large Synoptic Survey Telescope, promise data on the scale of exabytes (10¹⁸). It is conceivable that future data collection methods may generate data sets on the scale of zettabytes (10²¹) and yottabytes (10²⁴). The central issue with Big Data is that computing power doubles every 18 months (Moore's Law) and I/O bandwidth increases about 10% every year, but the amount of data doubles every year. It is clear that conventional distributed systems, such as those employed by Google, Facebook, and JPL (distributed active archive centers), must be expanded to include new technologies such as Hadoop and new analysis methods. The 44th Interface Symposium will focus on aspects of these Big Data issues.
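A quick back-of-the-envelope sketch, using the growth rates quoted above (compute doubling every 18 months, I/O bandwidth growing about 10% per year, data doubling every year), shows how fast data volume outruns the hardware; the ten-year horizon is an arbitrary choice for illustration:

```python
# Compare growth factors for compute, I/O bandwidth, and data volume,
# using the rates cited in the text. The 10-year horizon is illustrative.

def growth_after(years, doubling_months=None, annual_rate=None):
    """Multiplicative growth factor after `years`, given either a
    doubling period in months or a compound annual growth rate."""
    if doubling_months is not None:
        return 2 ** (years * 12 / doubling_months)
    return (1 + annual_rate) ** years

years = 10
compute = growth_after(years, doubling_months=18)  # Moore's Law: ~102x
io_bw = growth_after(years, annual_rate=0.10)      # ~10%/year: ~2.6x
data = growth_after(years, doubling_months=12)     # doubles yearly: 1024x

print(f"After {years} years: compute x{compute:.0f}, "
      f"I/O bandwidth x{io_bw:.1f}, data x{data:.0f}")
print(f"Data volume outgrows I/O bandwidth by ~{data / io_bw:.0f}x")
```

The point the paragraph makes falls out of the arithmetic: even though compute roughly keeps pace (about a 100x gain in a decade against a 1000x gain in data), I/O bandwidth falls behind by orders of magnitude, which is why architectures that move computation to the data, such as Hadoop, become necessary.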
