Big data, defined as collections of data so large in volume, velocity, and variety as to require specific technologies and analytical methods for the extraction of value, represent the biggest game-changing opportunity for business since the internet went mainstream almost twenty years ago. Through big data, organizations can measure, and hence know, radically more about their business and translate that knowledge directly into improved decision making; in particular, they are using big data to target customer-centric outcomes, tap into internal data, and build a better information ecosystem.

With the introduction of big data, the need for more advanced data visualization capabilities increases: organizations need to see analytical results presented visually, find relevance among millions of variables, communicate concepts and hypotheses to others, and even predict the future. Visualization-based approaches take the challenges posed by the "three Vs" of big data and turn them into opportunities for growth. The true value of big data, however, lies in the implicit knowledge derived from the analysis of groups of interrelated data sets, which allows deep correlations and hidden patterns to be found for business prediction; in short, it is the possibility of transforming "big data" into "big insights". Big data have changed long-standing ideas about the value of experience, the nature of expertise, and the practice of management, and only smart leaders will see big data for what it is: a management revolution.
Computing is an essential part of big data analysis. Mathematical and statistical methods, in order to be applied, must be implemented as programs and software systems executed on computer platforms in such a way as to provide reliable data analysis efficiently. This makes it crucially important to be able to exploit suitable technologies and tools to gather, store, organize, and manage huge amounts of data on one side and, on the other, to perform fast computations on such data, possibly through parallel techniques on distributed computer architectures. Students shall be introduced to the problems, methods, and tools needed to effectively implement and apply data analysis algorithms, both in small-size and in high-performance computing environments. Specific settings, such as dealing with text data or with data from social networks, shall also be considered.
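As an illustration of the kind of programming pattern this involves, the map-reduce style of computation splits a large text corpus into chunks, processes each chunk independently (a step that parallelizes naturally across cores or machines), and then merges the partial results. The sketch below is a minimal, single-process Python illustration of this pattern; the toy corpus, chunking, and function names are illustrative assumptions, not part of any specific course material.

```python
from collections import Counter
from functools import reduce

def map_chunk(chunk):
    """Map phase: count word occurrences in one chunk of text.
    Each call is independent, so chunks could be processed in
    parallel on different cores or machines."""
    return Counter(chunk.lower().split())

def reduce_counts(c1, c2):
    """Reduce phase: merge two partial word counts."""
    return c1 + c2

# A toy corpus split into chunks (in practice: files or data partitions).
chunks = [
    "big data needs scalable tools",
    "scalable tools for big data analysis",
]

partial = [map_chunk(c) for c in chunks]   # map step (parallelizable)
total = reduce(reduce_counts, partial)     # reduce step (merge results)
print(total["big"])  # → 2
```

In a distributed framework the map step would run on many machines at once and the reduce step would merge their outputs; the logical structure of the program, however, stays the same.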
This pillar targets the understanding and practical operation of modern, virtualized, cloud-based networks that can instantiate secure services on the fly, run them anywhere in the network, and shift them transparently to different locations. The pillar will make students familiar with four subjects:
- Security and Privacy: basic concepts and their applications
- Cloud and Mobile/Edge Cloud: main architecture and operation, services and platforms
- Monitoring and Processing for the Internet of People and Machines: technologies and tools for data generation and collection, e.g. machine-generated data
- Network Virtualization and Softwarization: data center architectures and current virtualization paradigms, and their impact on the service design/deployment/management chain
This pillar focuses on the statistical analysis of High-Dimensional Data (HDD), a framework where the number of variables is larger than the number of observations. Nowadays HDD are pervasive in almost every branch of knowledge, including the basic sciences, biology, business, economics, engineering, finance, and medicine. For instance, HDD are used to assess the effectiveness of new therapies, to monitor biological and climate phenomena, to optimise processes in industry and in public administration, to analyse consumer behaviour, to forecast macroeconomic and financial variables, and so on. The goal of the pillar is to endow students with concepts and methods in both supervised and unsupervised statistical learning and in the analysis of large-dimensional dynamic systems.
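To make the "more variables than observations" setting concrete, the following sketch (the dimensions, seed, and regularization parameter are our own illustrative choices) shows why ordinary least squares breaks down when p > n, while a ridge penalty restores a unique, well-defined estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 50                      # more variables (p) than observations (n)
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 0.5]        # only a few variables truly matter
y = X @ beta + 0.1 * rng.standard_normal(n)

# X'X is p x p but has rank at most n < p,
# so the OLS normal equations have no unique solution.
rank = np.linalg.matrix_rank(X.T @ X)

# Ridge regression adds lam * I, making the system invertible for any lam > 0:
#   beta_hat = (X'X + lam * I)^{-1} X'y
lam = 1.0
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

Penalized estimators of this kind (ridge, lasso, and their relatives) are the standard entry point into supervised high-dimensional statistics, precisely because the classical unregularized estimators are not defined when p > n.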