Computer Engineering – Study plan

The Master’s Degree in Computer Engineering is organized into four flexible
curricula covering various aspects of modern information processing systems
and representing the scientific excellence of the Department of Information Engineering.

Artificial intelligence and robotics

Like the steam engine or electricity in the past, AI and Robotics are transforming our world, our society and our industry. Artificial Intelligence is changing the way we manage and interact with not only digital data but also the real world. Robots are becoming ever more reliable, flexible and pervasive in our lives. Growth in computing power, availability of data and progress in algorithms have turned AI and Robotics into the most strategic technologies of the 21st century.
The AIRO curriculum, besides the foundational courses on AI and Robotics, provides a rich interdisciplinary set of courses addressing the different aspects of AI and Robotics: from computer vision to distributed systems, from industrial robot design to control theory for the physical stability of embodied agents. The focus of the curriculum is on the algorithms and processing that transform information into intelligent actions.

Mandatory subjects

In this course you will be introduced to the theory of computation, a branch of computer science which investigates the fundamental capabilities and limitations of computers. You will learn to model computation through abstract concepts such as formal languages, automata and grammars, and to answer basic questions regarding structural and computational properties of problems.
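The automata mentioned above are easy to make concrete. As a purely illustrative sketch (not course material), here is a deterministic finite automaton, encoded as a transition table, that accepts binary strings containing an even number of 1s; the state names and the language recognized are invented for the example.

```python
def dfa_accepts(s):
    # Transition table: (current state, input symbol) -> next state
    transitions = {("even", "0"): "even", ("even", "1"): "odd",
                   ("odd", "0"): "odd", ("odd", "1"): "even"}
    state = "even"                       # start state
    for ch in s:
        state = transitions[(state, ch)]
    return state == "even"               # "even" is the accepting state
```

The empty string and "1010" are accepted (zero and two 1s), while "1" is rejected.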

This course is an introduction to Mathematical Optimization theory and algorithms.
It mainly covers Linear Programming, Integer Linear Programming, and Graph Theory. Recent solution techniques (such as the branch-and-cut algorithm) are also covered.

What does it mean to learn from data? And how can a machine do it? The course answers such questions, presenting the fundamentals and basic principles of supervised/unsupervised learning. Topics include regression, classification, linear models, neural networks, and deep learning. The course includes hands-on experience with real data.
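The simplest instance of the supervised regression mentioned above is fitting a line y = a·x + b by ordinary least squares. The following sketch (with invented data, not taken from the course) computes the closed-form solution:

```python
def fit_line(xs, ys):
    # Ordinary least squares for a single feature:
    # a = cov(x, y) / var(x), b = mean(y) - a * mean(x)
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b
```

For the noiseless data xs = [0, 1, 2, 3], ys = [1, 3, 5, 7] it recovers a = 2, b = 1 exactly.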

In this course students learn fundamental Artificial Intelligence techniques for the solution of difficult problems. The main topics addressed are search techniques in a solution space, constraint-based systems, preference reasoning in multi-agent contexts, and reasoning under uncertainty.

The course helps students develop thorough competence in image acquisition, processing and understanding, in order to extract relevant information from visual data. It starts from the principles of image formation and low-level image processing, then moves towards higher-level algorithms and systems, including deep networks. The course also encourages students to develop computer vision systems for challenging real-world applications through laboratory sessions that use C++ programming and the OpenCV library.

Robots may be considered intelligent if they are equipped with a processing architecture to perceive the surrounding environment and to deliberate, learn and achieve complex goals based on this information. The course will provide the foundations for mastering the perception-action loop in embodied agents acting in the real world.
Students will acquire comprehensive knowledge of how to program intelligent agents (i.e., vision systems, mobile robots, and industrial robots) for collaborative tasks such as navigation and manipulation in an Industry 4.0 scenario.

Elective subjects (study plan rules may apply)

Students will gain foundational knowledge of deep learning algorithms, the new generation of Artificial Neural Networks (ANNs) that has boosted the field of machine learning. The course provides students with a solid theoretical background and hands-on skills to understand and develop deep networks. A variety of architectures will be presented, from traditional feed-forward (FF) and Recurrent Neural Networks (RNNs) to the more recent Convolutional Neural Networks (CNNs) and Long Short-Term Memories (LSTMs).

The analysis of big data is of paramount importance for a wide spectrum of contexts and applications. In the course you will learn fundamental methodologies and algorithmic tools for the effective and efficient processing of massive datasets, and will gain practical experience with state-of-the-art programming frameworks such as Spark.
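The core pattern behind frameworks such as Spark is map/shuffle/reduce. As a hedged sketch, the classic word-count example can be emulated with the standard library alone (no cluster required); a real Spark job would express the same pipeline with `flatMap` and `reduceByKey` over a distributed dataset:

```python
from collections import Counter

def word_count(lines):
    # "map" phase: emit a (word, 1) pair for every word in every line
    pairs = [(w, 1) for line in lines for w in line.split()]
    # "reduce" phase: sum the counts per key (the shuffle groups by word)
    counts = Counter()
    for w, c in pairs:
        counts[w] += c
    return dict(counts)
```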

The course provides the main mathematical competencies in the field of robotics. The main topics addressed are basic concepts of robotics (kinematic and dynamic models) and advanced control schemes for position and force control of robot manipulators (robust control, adaptive control, learning control, impedance control).

With the prevalence of network data collected in several domains, ranging from biology to social networks, learning from networks has become an essential task in many applications. The course introduces both basic and advanced algorithms to learn from networked data. Topics include network analytics and finding patterns in networks.

The main models and methodologies for Distributed Systems design and deployment are fully presented together with some modern applications. Topics span from architectures and services to deal with heterogeneity, scalability, fault tolerance, security and dependability, to Transactional Systems and High-Throughput and Data-Intensive Computing.

The course will give students the ability to develop robotics applications in the industrial sector. Models that define complex mechanical systems with open kinematic chains (robot manipulators), used in industrial robotics applications, will be studied and simulated. The problems of choosing the main components and optimizing the layout of robotic work cells will be addressed, and students will learn to use the typical commands of industrial robot programming languages.

In this course you will be introduced to the field of natural language processing, a branch of artificial intelligence which deals with the analysis of natural language texts. You will learn to develop and train statistical and neural network models that are capable of understanding, summarising, translating and extracting information from large amounts of text documents.

The growing popularity of 3D sensors is opening new trends in 3D modelling, reconstruction, motion estimation and object recognition. This course will focus on 3D data processing, ranging from sensors and algorithms for reconstruction to the recognition and visualisation of 3D data, applied to different domains with special attention to robotics and augmented reality.

Other choices (study plan rules may apply)

Neurorobotics is a groundbreaking topic that aims at exploiting human neural signals to drive robotic devices. The course will introduce students to the field of brain-machine interfaces and will provide theoretical and practical tools for analyzing and decoding brain signals and for translating them into actions of external actuators.

Quality plays a key role in every phase of the life of software and industrial products: from the initial idea, through the definition of user experience and needs, during the design and development stages, in the management of customer relationships, and beyond the product's life, where circular-economy needs arise.
The course follows a flipped-classroom approach; through a number of case studies examined in depth during the lessons, it provides the audience with the tools, approaches and methods that an ICT manager or designer needs to use daily in their professional career.

Cognitive Services are a set of machine learning algorithms available in the cloud that help software developers create intelligent applications. This course teaches the concepts, methods, and technologies underlying cloud-based APIs, SDKs and services, and how to apply Cognitive Services to the design and implementation of intelligent applications.

Game theory is the science of analyzing multi-objective multi-agent problems (i.e., “games”). This includes the games we play for fun in everyday life but, in more serious contexts, it is applied to resource competition, distributed management, and efficient allocation over multi-user systems and communication networks. This course teaches all the basic concepts of game theory, as well as some advanced ones, and applies them to scenarios of interest for ICT.
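One of the most basic concepts of game theory, the Nash equilibrium, can be checked mechanically. The sketch below (purely illustrative, not part of the course) tests whether a strategy profile in a two-player game given as payoff matrices is a Nash equilibrium; the payoffs encode the standard textbook prisoner's dilemma (strategy 0 = cooperate, 1 = defect):

```python
def is_nash(p1, p2, i, j):
    # Profile (i, j) is a Nash equilibrium if neither player
    # can gain by deviating unilaterally.
    best_i = all(p1[i][j] >= p1[k][j] for k in range(len(p1)))
    best_j = all(p2[i][j] >= p2[i][k] for k in range(len(p2[0])))
    return best_i and best_j

# Prisoner's dilemma payoffs (years of prison as negative utility)
P1 = [[-1, -3], [0, -2]]   # row player's payoffs
P2 = [[-1, 0], [-3, -2]]   # column player's payoffs
```

Mutual defection (1, 1) is the unique Nash equilibrium, even though mutual cooperation (0, 0) would leave both players better off.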

The course aims at developing attitudes and skills toward Innovation and Entrepreneurship: management of Innovation skills and new venture creation skills.
Contents: Opportunity identification, Feasibility analysis, Market analysis, Concept generation, Intellectual Property Rights, Financials of companies, Business Plan, Fundraising.

In this course, a specific NP-hard optimization problem (e.g., the Travelling Salesman Problem) is studied and solved using alternative state-of-the-art methods. Both exact (branch-and-cut based on commercial Mixed-Integer solvers such as IBM ILOG CPLEX) and heuristic algorithms are described, fully implemented in C/C++, and tested on a set of instances taken from the literature.
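For contrast with the exact branch-and-cut methods above, here is a hedged sketch of the simplest constructive heuristic for the TSP, nearest neighbour. It is far weaker than the methods the course covers, and the coordinates are invented, but it shows the flavour of a heuristic that trades optimality for speed:

```python
import math

def nearest_neighbour_tour(points):
    # Greedily build a tour: start at city 0, always move
    # to the closest unvisited city.
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```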

This course provides knowledge of the concepts of the “IoT” and “Smart Cities,” describing their scientific and market trends, as well as the application of these paradigms in practical ICT contexts. The students will learn about key platforms and standards (ZigBee, 6LoWPAN, WiFi, Bluetooth Low Energy, SigFox, LoRa), and will review their applications in home automation, industrial applications, autonomous driving, urban monitoring, and privacy and security.

Music, multimedia and interaction are at the core of ICT-led innovation in cultural and creative industries.
In this course we study the whole sound and music communication chain, through computational approaches, and learn to model, process and understand multimedia and affective information content with multidisciplinary methodologies.

The growing dependence of modern society on information systems makes security a critical aspect of computer engineering. In this course we adopt a hands-on approach to grasp the fundamentals of software, network, web and mobile security. We analyze attack techniques and study the available countermeasures.

Others (study plan rules may apply)

Bioinformatics

The analysis of modern biological, physical, and biomedical data sets requires advanced computational skills to address the many challenges that arise from the noisy nature of such data, the intricate relations among its components, and, in many cases, its sheer size. The Bioinformatics curriculum provides basic and advanced computational tools and algorithms to tackle such challenges. The educational offer provides an introduction to algorithmic and statistical techniques for the analysis of modern biological datasets (e.g., from high-throughput sequencing), to the application of such techniques to selected problems in biology, and to the computational techniques required to extract useful information from networked data. The course choices allow students to strengthen their core competencies with relevant subjects in computer engineering and to explore related areas in biology and biomedicine.

Mandatory subjects

In this course you will be introduced to the theory of computation, a branch of computer science which investigates the fundamental capabilities and limitations of computers. You will learn to model computation through abstract concepts such as formal languages, automata and grammars, and to answer basic questions regarding structural and computational properties of problems.

What does it mean to learn from data? And how can a machine do it? The course answers such questions, presenting the fundamentals and basic principles of supervised/unsupervised learning. Topics include regression, classification, linear models, neural networks, and deep learning. The course includes hands-on experience with real data.

Bioinformatics is the study and development of computational methods to analyse large collections of biological data, such as genomic sequences, in order to understand complex biological systems. The course will focus on the computer science aspects of computational molecular biology, such as genome alignment, pattern discovery and the analysis of sequencing data.
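A building block of the sequence-alignment methods mentioned above is the dynamic-programming edit distance. The following sketch computes the Levenshtein distance with unit costs; real aligners use richer scoring schemes (e.g., affine gap penalties and substitution matrices), so this is illustration only:

```python
def edit_distance(a, b):
    # Row-by-row dynamic programming: prev[j] holds the distance
    # between a prefix of a and the first j characters of b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution/match
        prev = cur
    return prev[-1]
```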

Statistical inference, that is, the ability to infer general conclusions from data, is crucial in data analysis. The course is an introduction to statistical thinking and provides the foundational and standard tools for statistical inference, covering topics such as statistical hypothesis testing and commonly used statistical tests.

Computational genomics is an interdisciplinary field that develops methods for understanding biology, in particular when the data sets are large and complex, such as molecular data including DNA and RNA sequence and expression data.
In this course you will learn methodologies for statistical analysis and data mining of high-throughput genomic data and how to apply these methodologies to real data using algorithms and software tools.

With the prevalence of network data collected in several domains, ranging from biology to social networks, learning from networks has become an essential task in many applications. The course introduces both basic and advanced algorithms to learn from networked data. Topics include network analytics and finding patterns in networks.

Elective subjects (study plan rules may apply)

The course teaches how to design and develop an application for the management of structured data over time. Students will gain competences concerning databases, their design using the entity-relationship model, the relational data model, and formal languages for querying a database. Students will carry out a real-world project for the design and development of a database application using a relational database management system (RDBMS), Structured Query Language (SQL), and Java.

In this course students learn fundamental Artificial Intelligence techniques for the solution of difficult problems. The main topics addressed are search techniques in a solution space, constraint-based systems, preference reasoning in multi-agent contexts, and reasoning under uncertainty.

The analysis of big data is of paramount importance for a wide spectrum of contexts and applications. In the course you will learn fundamental methodologies and algorithmic tools for the effective and efficient processing of massive datasets, and will gain practical experience with state-of-the-art programming frameworks such as Spark.

The objective is to learn the methodologies for Web design and development, practicing them through the implementation of an actual application. Students will acquire strong computer science competence in Web engineering, design methodologies and architectural alternatives. Students will learn the characteristics of Web 1.0 and Web 2.0 applications and will develop a real application using Java servlets, REST Web services, JavaScript, CSS3 and HTML5.

The main models and methodologies for Distributed Systems design and deployment are fully presented together with some modern applications. Topics span from architectures and services to deal with heterogeneity, scalability, fault tolerance, security and dependability, to Transactional Systems and High-Throughput and Data-Intensive Computing.

Mastering advanced algorithmic techniques is a skill central to both big data analysis and high-performance computing, where the quest for efficiency is of paramount importance. The course spans topics such as computational intractability, approximation, and randomization, with applications to graph and data analytics, and cryptography.

The growing dependence of modern society on information systems makes security a critical aspect of computer engineering. In this course we adopt a hands-on approach to grasp the fundamentals of software, network, web and mobile security. We analyze attack techniques and study the available countermeasures.

Other choices (study plan rules may apply)

The course deals with the analysis of biosignals, i.e., signals generated by human activity. Topics include the extraction of information from quasi-periodic signals; clustering of biosignals to classify users; unsupervised learning to perform quantization; statistical structures, and supervised learning, for pattern and classification problems.

The course covers analytical and synthetic engineering methodologies for the study of the central nervous system, and their potential and limitations in the study of pathophysiological brain processes. Topics include cerebral hemodynamics, brain activation maps and connectivity, map generation from PET images, clustering, PCA and ICA, diffusion tensor MRI.

In this course, a specific NP-hard optimization problem (e.g., the Travelling Salesman Problem) is studied and solved using alternative state-of-the-art methods. Both exact (branch-and-cut based on commercial Mixed-Integer solvers such as IBM ILOG CPLEX) and heuristic algorithms are described, fully implemented in C/C++, and tested on a set of instances taken from the literature.

This course presents principles and techniques related to DNA sequencing technology and genome sequencing. The main topics addressed are functional genomics (transcriptomics, gene prediction and annotation), the analysis of polymorphisms, genome resequencing, and systems biology. The students will learn how to prepare NGS libraries and will apply bioinformatic approaches to the analysis of genomic data.

Others (study plan rules may apply)

High performance and big data computing

In almost every sector of society, spanning from retailing to finance, from travel to telecommunication, from basic research to advanced engineering, successful endeavors must exploit the availability of large volumes of (structured and unstructured) data through the ability to perform complex analytics to extract significant information, and the capacity to do so efficiently by means of advanced computing systems. The HPBDC curriculum provides the fundamental skills required to approach these tasks. The course offer aims at providing, on the one hand, a solid foundation in problem solving and data analysis through the study of algorithmic techniques, optimization and statistical methods for learning from data, and, on the other, more targeted expertise on the leading techniques for high performance and big data computing.

Mandatory subjects

In this course you will be introduced to the theory of computation, a branch of computer science which investigates the fundamental capabilities and limitations of computers. You will learn to model computation through abstract concepts such as formal languages, automata and grammars, and to answer basic questions regarding structural and computational properties of problems.

This course is an introduction to Mathematical Optimization theory and algorithms.
It mainly covers Linear Programming, Integer Linear Programming, and Graph Theory. Recent solution techniques (such as the branch-and-cut algorithm) are also covered.

What does it mean to learn from data? And how can a machine do it? The course answers such questions, presenting the fundamentals and basic principles of supervised/unsupervised learning. Topics include regression, classification, linear models, neural networks, and deep learning. The course includes hands-on experience with real data.

The analysis of big data is of paramount importance for a wide spectrum of contexts and applications. In the course you will learn fundamental methodologies and algorithmic tools for the effective and efficient processing of massive datasets, and will gain practical experience with state-of-the-art programming frameworks such as Spark.

Mastering advanced algorithmic techniques is a skill central to both big data analysis and high-performance computing, where the quest for efficiency is of paramount importance. The course spans topics such as computational intractability, approximation, and randomization, with applications to graph and data analytics, and cryptography.

The course presents a theoretical framework for the design and employment of parallel computing systems (e.g., multiprocessors, GPUs, FPGAs). Topics include the design and analysis of parallel algorithms and architectures, parallel programming (with laboratory activities), and the joint optimization of algorithm and architecture for integrated circuits.

Statistical inference, that is, the ability to infer general conclusions from data, is crucial in data analysis. The course is an introduction to statistical thinking and provides the foundational and standard tools for statistical inference, covering topics such as statistical hypothesis testing and commonly used statistical tests.

Elective subjects (study plan rules may apply)

The course teaches how to design and develop an application for the management of structured data over time. Students will gain competences concerning databases, their design using the entity-relationship model, the relational data model, and formal languages for querying a database. Students will carry out a real-world project for the design and development of a database application using a relational database management system (RDBMS), Structured Query Language (SQL), and Java.

In this course students learn fundamental Artificial Intelligence techniques for the solution of difficult problems. The main topics addressed are search techniques in a solution space, constraint-based systems, preference reasoning in multi-agent contexts, and reasoning under uncertainty.

Bioinformatics is the study and development of computational methods to analyse large collections of biological data, such as genomic sequences, in order to understand complex biological systems. The course will focus on the computer science aspects of computational molecular biology, such as genome alignment, pattern discovery and the analysis of sequencing data.

The course teaches how to design and develop systems for searching, retrieving, and ranking information at Web scale. Students will learn the theoretical and technical foundations of information retrieval, focusing on reference applications like search engines, recommender systems, and social media. Students will engage in the development of a real-world application, by using advanced retrieval models, semantic search, learning-to-rank, neural and online learning techniques.

Students will gain foundational knowledge of deep learning algorithms, the new generation of Artificial Neural Networks (ANNs) that has boosted the field of machine learning. The course provides students with a solid theoretical background and hands-on skills to understand and develop deep networks. A variety of architectures will be presented, from traditional feed-forward (FF) and Recurrent Neural Networks (RNNs) to the more recent Convolutional Neural Networks (CNNs) and Long Short-Term Memories (LSTMs).

With the prevalence of network data collected in several domains, ranging from biology to social networks, learning from networks has become an essential task in many applications. The course introduces both basic and advanced algorithms to learn from networked data. Topics include network analytics and finding patterns in networks.

The main models and methodologies for Distributed Systems design and deployment are fully presented together with some modern applications. Topics span from architectures and services to deal with heterogeneity, scalability, fault tolerance, security and dependability, to Transactional Systems and High-Throughput and Data-Intensive Computing.

Other choices (study plan rules may apply)

The main goal of the Cryptography course is to give an overview of the theoretical basis of the field in order to allow a critical study of the cryptographic protocols used in many applications (authentication, digital commerce).

In the first part we will give the basic mathematical tools (essentially from elementary and analytic number theory) required to understand modern public-key methods. In the second part we will see how to apply this knowledge to study and critique some protocols currently in use.
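The public-key idea rests directly on the number theory covered in the first part. As a toy sketch, here is textbook RSA with tiny primes: utterly insecure (no padding, trivially factorable modulus) and for illustration only, but it shows how modular exponentiation and the modular inverse fit together:

```python
def make_keys(p, q, e):
    # n = p*q is the public modulus; d is e's inverse mod phi(n),
    # so (m^e)^d = m (mod n) by Euler's theorem.
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)        # modular inverse (Python 3.8+)
    return (e, n), (d, n)      # (public key, private key)

def crypt(m, key):
    # Encryption and decryption are the same operation
    # with different exponents.
    k, n = key
    return pow(m, k, n)
```

With p = 61, q = 53, e = 17, encrypting a message and decrypting it with the private key recovers the original.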

Computational genomics is an interdisciplinary field that develops methods for understanding biology, in particular when the data sets are large and complex, such as molecular data including DNA and RNA sequence and expression data.
In this course you will learn methodologies for statistical analysis and data mining of high-throughput genomic data and how to apply these methodologies to real data using algorithms and software tools.

Game theory is the science of analyzing multi-objective multi-agent problems (i.e., “games”). This involves the games we usually play for fun in our everyday life, but in a more serious context is applied to resource competition, distributed management, efficient allocation over multi-user systems and/or communication networks. This course teaches all the basic concepts, as well as some advanced ones, of game theory. Also, it applies them to scenarios of interest for ICT.

In this course you will be introduced to the field of natural language processing, a branch of artificial intelligence which deals with the analysis of natural language texts. You will learn to develop and train statistical and neural network models that are capable of understanding, summarising, translating and extracting information from large amounts of text documents.

This is a theoretical course providing knowledge of the main mathematical tools and modeling techniques for the study of telecommunication networks and networking protocols. It covers the theoretical basics of Markov chains, renewal processes, queueing theory and traffic models, with applications to the analysis of networking protocols.

In this course, a specific NP-hard optimization problem (e.g., the Travelling Salesman Problem) is studied and solved using alternative state-of-the-art methods. Both exact (branch-and-cut based on commercial Mixed-Integer solvers such as IBM ILOG CPLEX) and heuristic algorithms are described, fully implemented in C/C++, and tested on a set of instances taken from the literature.

Others (study plan rules may apply)

Web information and data engineering

“Data is the new oil” is an often-heard quote, and it well reflects the paramount importance and pervasiveness of data-related needs in every sector of our society and industry. Future professionals need to fully master the algorithms, methods, techniques and architectures needed to store, manage, access, search, recommend, link, and share both structured and unstructured data at Web scale, in an effective and efficient way. The WIDE curriculum offers deep knowledge in databases, Web applications, search engines, recommender systems, semantic technologies, distributed systems, and security. The course offer will provide students with wide-reaching competencies and skills in high-impact domains such as health, cultural heritage, intellectual property, multilingual and multimodal information access, social media, and e-commerce.

Mandatory subjects

In this course you will be introduced to the theory of computation, a branch of computer science which investigates the fundamental capabilities and limitations of computers. You will learn to model computation through abstract concepts such as formal languages, automata and grammars, and to answer basic questions regarding structural and computational properties of problems.

This course is an introduction to Mathematical Optimization theory and algorithms.
It mainly covers Linear Programming, Integer Linear Programming, and Graph Theory. Recent solution techniques (such as the branch-and-cut algorithm) are also covered.

What does it mean to learn from data? And how can a machine do it? The course answers such questions, presenting the fundamentals and basic principles of supervised/unsupervised learning. Topics include regression, classification, linear models, neural networks, and deep learning. The course includes hands-on experience with real data.

The course teaches how to design and develop systems for searching, retrieving, and ranking information at Web scale. Students will learn the theoretical and technical foundations of information retrieval, focusing on reference applications like search engines, recommender systems, and social media. Students will engage in the development of a real-world application, by using advanced retrieval models, semantic search, learning-to-rank, neural and online learning techniques.

The objective is to learn the methodologies for Web design and development, practicing them through the implementation of an actual application. Students will acquire strong computer science competence in Web engineering, design methodologies and architectural alternatives. Students will learn the characteristics of Web 1.0 and Web 2.0 applications and will develop a real application using Java servlets, REST Web services, JavaScript, CSS3 and HTML5.

Database II provides advanced methods and tools to manage and query vast quantities of structured data at a fast rate. The course tackles the theoretical and technical foundations of graph databases and knowledge bases. We study cutting-edge techniques to access structured data in natural language and to deal with data provenance and citation, data pricing, and data quality.

Elective subjects (study plan rules may apply)

The course teaches how to design and develop an application for the management of structured data over time. Students will gain competences concerning databases, their design using the entity-relationship model, the relational data model, and formal languages for querying a database. Students will carry out a real-world project for the design and development of a database application using a relational database management system (RDBMS), Structured Query Language (SQL), and Java.

Nowadays, computer engineers are asked to properly manage the architectural patterns and paradigms for the design of complex software systems based on software platforms. Topics span from Service-Oriented Architecture (SOA) and micro-services for software platforms, to virtual machines and containers, up to tools for composition and orchestration.

Statistical inference, that is, the ability to infer general conclusions from data, is crucial in data analysis. The course is an introduction to statistical thinking and provides the foundational and standard tools for statistical inference, covering topics such as statistical hypothesis testing and commonly used statistical tests.

The analysis of big data is of paramount importance for a wide spectrum of contexts and applications. In the course you will learn fundamental methodologies and algorithmic tools for the effective and efficient processing of massive datasets, and will gain practical experience with state-of-the-art programming frameworks such as Spark.

The main models and methodologies for Distributed Systems design and deployment are fully presented together with some modern applications. Topics span from architectures and services to deal with heterogeneity, scalability, fault tolerance, security and dependability, to Transactional Systems and High-Throughput and Data-Intensive Computing.

Real-time and concurrent programming issues are investigated in depth to foster strong competences in the design and management of real-time systems. Topics include real-time operating systems, their specific scheduling policies, the analysis of periodic processes, and feasibility and safety conditions for real-time systems.

The growing dependence of modern society on information systems makes security a critical aspect of computer engineering. In this course we will adopt a hands-on approach to grasp the fundamentals of software, network, web and mobile security. We will analyze attack techniques and study available countermeasures.

Music, multimedia and interaction are at the core of ICT-led innovation in cultural and creative industries.
In this course we study the whole sound and music communication chain, through computational approaches, and learn to model, process and understand multimedia and affective information content with multidisciplinary methodologies.

In this course you will be introduced to the field of natural language processing, a branch of artificial intelligence which deals with the analysis of natural language texts. You will learn to develop and train statistical and neural network models that are capable of understanding, summarising, translating and extracting information from large amounts of text documents.

Other choices (study plan rules may apply)

The main goal of the Cryptography course is to give an overview of the theoretical basis of the field in order to allow a critical study of the cryptographic protocols used in many applications (authentication, digital commerce).

In the first part we will give the basic mathematical tools (essentially from elementary and analytic number theory) that are required to understand modern public-key methods. In the second part we will see how to apply this know-how to study and critically assess some protocols currently in use.
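One of the elementary number-theoretic tools in question is modular exponentiation, which underlies the Diffie-Hellman key exchange. A toy sketch with deliberately tiny, insecure parameters (real deployments use moduli of 2048 bits or more):

```python
# Toy Diffie-Hellman key exchange with tiny, insecure parameters,
# for illustration only.
p, g = 23, 5          # public prime modulus and generator
a, b = 6, 15          # private exponents of the two parties

A = pow(g, a, p)      # Alice publishes g^a mod p
B = pow(g, b, p)      # Bob publishes g^b mod p

# Each side computes the same shared secret g^(ab) mod p
# without ever transmitting a or b.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
print(shared_alice)   # 2
```

The security of the scheme rests on the difficulty of recovering `a` from `g^a mod p` (the discrete logarithm problem), one of the hardness assumptions studied in the first part of the course.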

This course presents machine learning and signal processing technologies that can be used to handle digital evidence in investigations. The aim is to provide students with a set of analysis strategies for hard disks, network streams, image and video data (including authentication and fake detection), and data from social networks. These techniques are reviewed and discussed both theoretically and via some case studies. The course includes the intervention of legal specialists who describe the procedural implications of the technical analysis and the juridical consequences of expert decisions.

Quality has a key role in all the life-cycle phases of software as well as industrial products: from the initial idea, through the definition of the user experience and needs, during the design and development stages, in the management of customer relationships, and beyond the product's life, where circular-economy needs arise.
The course follows a flipped-classroom approach: through a number of case studies examined in depth during the lessons, it provides the audience with the tools, approaches and methods that an ICT manager or designer needs to use daily in their professional career.

The course presents the fundamental principles of communication networks: the characterization of Markovian fading channels for the analysis of higher-layer protocols operating over wireless links; hybrid link-layer technologies integrating retransmissions and coding; and the analysis of the TCP protocol, its modern variants, and its performance in wireless networks. The student will become knowledgeable about modern centralized and distributed wireless systems such as IEEE 802.11 (a/g/n/h/ac) and Wireless Sensor Networks. For such systems, emphasis will be given to the description and, where possible, the performance analysis of channel access and routing techniques for decentralized wireless networks.

In modern communication systems, protection against malicious behavior is a primary issue, and must be part of the design from the beginning rather than a patch added as a belated measure. The class introduces fundamental notions and tools in information security, with a focus on the solutions, attacks, and countermeasures that can be deployed at the different layers of modern communication networks. The students will be asked to apply their acquired knowledge to practical use cases, industrial standards, and experimental scenarios.

In this course, a specific NP-hard optimization problem (e.g., the Travelling Salesman Problem) is studied and solved using alternative state-of-the-art methods. Both exact (branch-and-cut based on commercial Mixed-Integer solvers such as IBM ILOG CPLEX) and heuristic algorithms are described, fully implemented in C/C++, and tested on a set of instances taken from the literature.
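Although the course implementations are in C/C++, the flavour of a constructive heuristic for the TSP can be conveyed by a minimal sketch. Below is a nearest-neighbour heuristic in Python on hypothetical 2D points (the points and the greedy tie-breaking rule are illustrative assumptions, not the course's code):

```python
import math

def nearest_neighbour_tour(points):
    """Greedy TSP heuristic: start at point 0 and repeatedly visit
    the closest unvisited point. Fast, but not optimal in general."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        # Ties broken by lowest index, since the list is ordered.
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    # Total length of the closed tour (returning to the start).
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]  # corners of the unit square
tour = nearest_neighbour_tour(pts)
print(tour, tour_length(pts, tour))  # [0, 1, 2, 3] 4.0
```

Exact methods like branch-and-cut instead prove optimality by solving linear-programming relaxations and adding violated cuts; heuristics such as this one trade that guarantee for speed on large instances.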

Others (study plan rules may apply)